Developing technology is like driving a race car: You push the machinery as fast as it’ll go, and if you can avoid a crash, a prize awaits you at the finish line. For engineers, the reward is sometimes monetary, but more often it’s the satisfaction of seeing the world become a better place.
Thanks to many such engineers driving many such race cars, a lot of progress is about to happen in an unexpected spot: the electricity sector. The power grid’s interlocking technological, economic, and regulatory underpinnings were established about a century ago and have undergone only minimal disruption in the decades since. But now the industry is facing massive change.
Most observers are only vaguely aware of the magnitude of this overhaul, perhaps because it’s a hard story to tell. It doesn’t translate well to a set of tweets. Many people have come to think of the electric-utility business in much the same way they think of their taxes—boring, tedious, and somehow always costing more money.
What’s happening in this industry stems from technology improvements, economic forces, and evolving public priorities. As the changes dig away at the very foundation of the electricity sector, the results are likely to be anything but boring. Yet they may well cost you more money.
For about a century, affordable electrification has been based on economies of scale, with large generating plants producing hundreds or thousands of megawatts of power, which is sent to distant users through a transmission and distribution grid. Today, many developments are complicating that simple model.
At the top of the list is the availability of low-cost natural gas and solar power. Generators based on these resources can be built much closer to customers. So we are now in the early stages of an expansion of distributed generation, which is already lessening the need for costly long-distance transmission. That, in turn, is making those new sources cost competitive with giant legacy power plants.
Distributed generation has long been technically possible. What’s new now is that we are nearing a tipping point, beyond which, for many applications, distributed generation will be the least costly way to provide electricity.
While it certainly helps, the declining cost of renewables and gas-fired electricity is not all that’s spurring this change. To be competitive, the entire distributed system will have to work well as a whole. Quite a few technological advances are coming together to make that possible: advanced control systems; more compact, smarter, and more efficient electrical inverters; smart electricity meters and the burgeoning Internet of Things; and the ever-growing ability to extract actionable information from big data.
Amid this changing scene, a picture is beginning to emerge of what a typical electrical grid may well look like in 10 or 20 years in most of the developed world. Yes, generation will be much more decentralized, and renewables such as solar and wind will proliferate. But other aspects are also shifting. For example, the distribution network—the part of the grid to which your home and business connect—will likely become more of a negotiating platform than a system that just carries electricity from place to place.
Getting to this more sophisticated grid won’t be easy. Nevertheless, it’s coming. What will it look like? Here is my best guess, based on my decades of experience as a government official charged with helping electric utilities get access to emerging technologies. It is the future I’m now working to help realize as an academic researcher.
The first thing to understand is that decentralization is going to be neither simple nor universal. In some places, decentralization will prevail, with most customers generating much of their own power, typically from solar photovoltaics. Others might use small-scale wind turbines. In regions where sunlight and wind are less plentiful, natural gas will probably predominate. Interwoven with all of these, a continuously improving version of the legacy grid will survive for decades to come.
According to the U.S. Energy Information Administration (EIA), in the first 11 months of 2016, some 48.82 million megawatt-hours of distributed solar energy were produced in the country, up 46 percent from the year before. That’s still a tiny proportion, though. In 2016, about 1.4 percent of electricity in the United States came from the sun via solar panels, including both utility-scale plants and distributed ones, according to the EIA. But solar is growing fast because of its increasingly favorable economics. For example, in Chile’s most recent power auction, 120 MW of solar power was the lowest-cost option, at US $29.10 per megawatt-hour.
Many analysts expect that grid-connected, distributed solar power will be fully cost competitive with conventional forms of generation by the end of this decade. In the meantime, a dizzying array of government incentives, which vary from region to region (even within one country), is helping the technology take off.
Ultimately, the lowest-cost form of generation will dominate. But figuring out what the lowest-cost option actually is will be tricky because it will depend on both local conditions and local decisions.
For example, regulators are increasingly convinced that the burning of fossil fuels leads to significant societal costs, both from the direct exposure of those living near some power plants to their noxious emissions and from greenhouse-gas-induced climate change. Historically, these costs were difficult to quantify. So they were typically borne not by the producers or consumers of the electricity but by the victims—for example, farmers whose crops were damaged.
There is growing public interest in understanding the true cost of pollution and possibly shifting more of it to electricity producers and possibly consumers as well. Fortunately, we now have the modeling and computational capabilities to begin to put a reasonable lower limit on those costs, which gives us a defensible way to reallocate them.
Although the best strategies for reallocating those costs are still being debated, the benefits of distributed renewable generation are already very apparent—as is the feasibility. Data collected during the Pecan Street Project, funded by the U.S. Department of Energy, indicates that a house in Austin, Texas, outfitted with solar panels typically generates 4 or 5 kilowatts during the midday hours of a sunny day in summer, which exceeds the amount of power the home typically uses during such a period.
Whether or not rooftop solar makes sense for a particular homeowner, however, depends on the initial cost, maintenance costs, subsidies, the cost of grid power, and the selling price of the excess electricity generated.
The U.S. Department of Energy’s SunShot initiative aims to make solar power cost competitive—without subsidies—by 2030. (A Chinese government agency has a similar agenda.) Specifically, SunShot’s goal is to reduce the cost of distributed, residential solar power to 5 U.S. cents per kilowatt-hour by 2030, down from about 18 cents today. A 6-kW rooftop residential solar system in the United States now typically costs between $15,000 and $20,000, depending on where you live. According to data from the EIA, the average retail cost of electricity delivered by the grid in the United States is 12.5 cents per kilowatt-hour. So at 18 cents, rooftop-generated solar is not yet, on average, competitive with grid-delivered electricity. That is why many governments, including U.S. state governments, subsidize the purchase of solar-power systems to make them competitive.
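To see how per-kilowatt-hour figures relate to the sticker price of a rooftop system, consider a back-of-envelope calculation. The capacity factor, lifetime, and system cost below are illustrative assumptions, not data from this article, and the simple division ignores financing, maintenance, and inverter replacement (which is partly why real residential costs run higher):

```python
def levelized_cost_per_kwh(system_cost_usd, system_kw, capacity_factor, lifetime_years):
    """Undiscounted levelized cost: total cost divided by lifetime energy output."""
    lifetime_hours = lifetime_years * 365 * 24
    lifetime_kwh = system_kw * capacity_factor * lifetime_hours
    return system_cost_usd / lifetime_kwh

# Hypothetical inputs: an $18,000, 6-kW system with an 18 percent
# capacity factor and a 25-year service life.
cost = levelized_cost_per_kwh(18_000, 6.0, 0.18, 25)  # hardware alone
```

Even under these optimistic hardware-only assumptions the figure is only a rough lower bound; adding real-world financing and upkeep is what drives actual residential costs toward the higher per-kilowatt-hour estimates.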
Meanwhile, many utilities are experimenting with alternative-ownership options. One is community solar, in which individual consumers buy a small number of panels in a relatively large, utility-scale system. They then get monthly credits for the electricity generated without having panels on their roofs. Another experiment, being run by CPS Energy, in San Antonio, uses rooftop solar, but CPS Energy owns the equipment and pays the homeowner for the use of the roof.
One challenge with distributed solar is storage. Most solar-panel owners are using the grid as the functional equivalent of storage: They sell excess power to the grid when they can and buy back from the grid to compensate for shortfalls. This is usually the simplest and cheapest way to even out differences in production and consumption. Nevertheless, many people—most notably, Elon Musk—are betting the economics will soon favor batteries. Musk’s electric-car company, Tesla, sells a battery for home use called Powerwall 2, which costs $5,500 and offers 14 kWh of storage, enough to run an average home overnight. However, adding the costs of battery storage to a solar installation to go off grid makes the costs of power significantly higher than those of ordinary electricity from the grid.
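The Powerwall figures above can be turned into a rough cost of storage per kilowatt-hour cycled through the battery, which shows why going off-grid raises the cost of power. The cycle count and round-trip efficiency below are assumptions for illustration, not Tesla specifications:

```python
def storage_cost_per_kwh(battery_price_usd, usable_kwh, lifetime_cycles,
                         round_trip_efficiency=0.9):
    """Spread the battery's price over every kWh it delivers in its lifetime."""
    delivered_kwh = usable_kwh * lifetime_cycles * round_trip_efficiency
    return battery_price_usd / delivered_kwh

# $5,500 for 14 kWh of storage (from the article); one cycle per day
# for ten years (3,650 cycles) and 90 percent efficiency are assumed.
adder = storage_cost_per_kwh(5_500, 14, 3_650)
```

Under these assumptions the battery adds roughly 12 cents to every kilowatt-hour it time-shifts, on top of the cost of generating that energy, which is why using the grid as de facto storage remains the cheaper option for most owners today.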
Comparing the options for expanding the use of solar power is not straightforward, however, because much depends on how the grid will evolve. For example, right now, the grid could not handle a changeover to 100 percent solar (even in areas where it would make sense, like the southwestern United States or the North African desert). The grid we have today was designed around sources whose output generally varies little from day to day. But the U.S. DOE, under its ENERGISE program, is striving to develop, by 2030, the control, protection, and other technologies needed to enable an entirely solar-powered grid.
The grid will evolve in other ways, too, and quickly. One of the most important trends, already well under way, is the increasing use of microgrids. A microgrid is a group of connected power sources and loads. It can be as small as an individual house (often dubbed a nanogrid) or as large as a military base or college campus. Microgrids can operate indefinitely on their own and can quickly isolate themselves if a disturbance destabilizes the larger grids to which they are normally connected.
This is an important feature during both natural and man-made disasters. Consider what happened when Hurricane Ike hit the Houston-Galveston area of Texas in 2008: Blackouts were widespread, but 95 percent of the outages were caused by damage to less than 5 percent of the grid. The grid effectively distributed the effects of what was only modest equipment damage.
This isolating capability of microgrids also promises enhanced cybersecurity. That’s because microgrids can help keep localized intrusions local, making the grid a much less appealing target for hackers.
When disaster strikes, whatever its cause, microgrids can limit the consequences. If it is not physically damaged, a microgrid can operate as long as it has access to a source of power, whether that’s natural gas, the sun, or wind.
In the long term, with the timing depending as much on economics and regulation as technology, it is quite possible that the grid will evolve into a series of adjoining microgrids. Utilities have proposed to build such microgrid “clusters” in, among other places, Chicago, Pittsburgh, and Taiwan, a tropical island where grids are prone to storm damage. These adjoining microgrids would share power with one another and with the legacy grid to minimize energy cost and to maximize availability.
In an era of adjoining microgrids that are privately owned and operated, what will become of the utility company? There are at least two possibilities. It might simply supply power to the microgrids that need it, rather than doing that for individual customers. Or it might manage microgrids and their connections with one another and to the legacy grid. Across the United States, the concept of a utility is already being reinvented in some places as more competition is introduced. Microgrids are going to accelerate that trend.
The spread of distributed generation and the rise of microgrids will also be shaped by two other factors: the expansion of the Internet of Things and the growing influence of big data.
The Internet of Things is a boon for distributed generation because it is giving rise to industries that are mass-producing sensors, microcontrollers, software, and other gear that will be easily and cheaply adaptable for use in future, data-driven grids and microgrids. How will these things be used? Imagine a residential solar-power system of the near future. It will have “customer equipment”—solar panels, a smart inverter, a storage battery, and systems to manage loads dynamically. From time to time, the power output of that installation will be lower than usual because of, say, a heavily overcast day.
But it would be easy to design a control system, based on readily available IoT components, that could communicate with similar systems in surrounding houses. These systems would work together, for example, to turn air conditioners on or off ahead of or behind schedule, or to alter their thermostats by half a degree, to accommodate intermittent, unexpected shortfalls in capacity. What would enable this plan to work is the fact that most modern homes are well insulated, so it takes time before the internal temperature changes enough to trigger the HVAC system. The reason homes would be grouped together in this scheme is that grouping makes the task easier: Within the group, some homeowners would be willing to sacrifice a lot of comfort, others less. But the power needs of the group as a whole would be relatively predictable and manageable from the utility’s standpoint.
Most consumers do not want to make frequent and detailed decisions on energy use. So imagine a device—let’s call it an energy thermostat—that permits you to set a range of comfortable temperatures, rather than entering a single one. The wider you set the range, the less you’ll pay for power. The grid or microgrid operator would use the range—yours and everybody else’s—to dynamically match supply and demand on a minute-by-minute basis. On a hot afternoon, with demand at its peak, the temperature in your home would be at the top of the range.
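A minimal sketch of how an operator might exploit those comfort ranges, assuming a hypothetical interface in which each home reports only its acceptable temperature band. The `Home` type and the linear shortfall rule are inventions for illustration, not a description of any deployed system:

```python
from dataclasses import dataclass

@dataclass
class Home:
    low: float   # coolest acceptable indoor temperature (degrees F)
    high: float  # warmest acceptable indoor temperature (degrees F)

def setpoints_for_shortfall(homes, shortfall_fraction):
    """Push each home's cooling setpoint up within its comfort band.

    shortfall_fraction = 0.0 puts everyone at the bottom of their band
    (ample supply); 1.0 puts everyone at the top (peak demand). Homes
    that chose wider bands absorb more of the shortfall, and pay less.
    """
    return [h.low + shortfall_fraction * (h.high - h.low) for h in homes]

# Three homes with differently priced comfort ranges.
homes = [Home(70, 74), Home(72, 78), Home(68, 76)]
setpoints = setpoints_for_shortfall(homes, 0.5)  # a moderately tight afternoon
```

A real operator would close the loop minute by minute, picking the smallest shortfall fraction at which forecast supply covers the resulting demand; the point here is only that a single scalar, applied to everyone's declared range, can shed load without any per-home negotiation.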
Electric utilities will also begin making greater and much more effective use of big data. Utilities have been using data since the very beginning: When Thomas Edison opened the Pearl Street power station, in New York City in 1882, it had indicator lights to show when the load had increased or decreased enough to warrant adjustments to the dynamo producing the DC power. But that system clearly was not scalable. If a utility had to readjust its generators every time a customer came online, the industry would have died out long ago.
Having a large number of loads makes the aggregate demand predictable—and manageable. This happy condition obviously depends on there being little correlation of usage from house to house and business to business. But just suppose that at 3:00 p.m. on a hot summer day, everyone in a medium-size city turned off their air conditioners at the very same second, waited 15 minutes, and then turned them all back on again at exactly the same time. That would almost certainly cause a massive blackout.
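The statistical effect described here, aggregate demand becoming smoother as more uncorrelated loads are added, can be demonstrated with a small simulation. The load size, on-probability, and trial count are made-up values chosen only to show the trend:

```python
import random
import statistics

def relative_spread(n_homes, p_on=0.3, kw_per_ac=3.0, trials=2_000, seed=42):
    """Standard deviation of total demand as a fraction of mean demand,
    for n_homes whose air conditioners switch on independently."""
    rng = random.Random(seed)
    totals = [sum(kw_per_ac for _ in range(n_homes) if rng.random() < p_on)
              for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

# Ten homes swing wildly; a thousand homes barely ripple.
spreads = {n: relative_spread(n) for n in (10, 100, 1000)}
```

The relative spread falls roughly as one over the square root of the number of homes, which is exactly the predictability utilities rely on, and exactly what the synchronized on-off scenario above would destroy.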
With big-data tools, it may no longer be necessary to depend on consumers’ actions being only loosely similar. It should be possible to understand how to adjust production and consumption to enhance system behavior. For example, with the energy-thermostat concept outlined above, the system operator needs to have not only the appropriate controllers but also access to real-time data to determine the risk of system failure when load-management actions are taken.
Utilities in many areas have embarked on this path using various customer incentives to permit, say, time-of-day pricing or some other form of load management by the utility rather than by the consumer. But we are now taking just baby steps. Big-data tools will soon let us take larger strides and may well one day let us run. It may be possible to use real-time operational data to optimize the performance of large sections of the grid and to predict future performance.
Although my main goal is to describe a hopeful vision that many of us in the utility business have for the electric grid, I would be remiss if I did not point out some of the challenges. These include financial ones, regulatory ones, and technical ones. And they come in all shapes and sizes.
One of the most fundamental is slow growth. To pay for costly system upgrades, utilities in the past relied heavily on growth in demand, and therefore in sales. But improvements in efficiency, which consumers seek (and rightly so), have slowed growth in demand to the point where it now increases more slowly than gross domestic product. And the figures are sobering: In 2014, the U.S. DOE predicted that from 2012 to 2040, demand for electricity would grow by only 0.9 percent per year. So utilities cannot expect to fund the required system changes as they have in the past, through growth.
Other shifts in the industry will only exacerbate these money woes. For example, in the past utilities could count on key pieces of equipment lasting a long time. But smart grids depend on electronic components, such as smart meters, controlled by software, which have shorter lifetimes and require much more frequent upgrades.
The biggest unknown is how swiftly the regulatory process can adapt. If it can’t move quickly enough to keep up with the technology, expect agonizingly slow change. And what if governments try to prop up outmoded technologies with subsidies? That could drag out the process further. On the other hand, some would argue that regulators should slow the rate of change. Though the arguments for that are worthy of political discussion, I’m certainly not in that camp.
Historically, regulations have been driven mainly by legal and economic considerations rather than by technical ones. But now, with the pace of technology outrunning other factors, regulators in the United States and Europe are reacting to this new state of affairs in many different ways. My view is that the staffing of regulatory agencies will need to become more technically savvy if we are to navigate these turbulent waters while continuing to provide electric power with the lowest cost and highest reliability.
I’m confident that in the end, we’ll have electrical grids that are less costly, more sustainable, and more user friendly than the ones that came before. The United States’ National Academy of Engineering recently selected electrification as the top engineering accomplishment of the 20th century. But electrification now needs to be reengineered to meet the needs and opportunities of the 21st century. This is our chance to show that we are as good as our forebears of two, three, or four generations ago at technology, regulation, public policy, finance, and the management of change in general. And to leave to posterity a legacy as fine and enduring as the one that was left to us.
About the Author
Robert Hebner is the director of the Center for Electromechanics at the University of Texas at Austin.