
Breakdowns in the Electric Grid and Enterprise Data Management

Posted by Marc Aubin on Oct 2, 2016 5:20:15 PM

In our last post, we outlined the state of the technology behind the electric grid and behind most relational database management systems. In both cases, the technology is arguably old. In this post, we will show why that’s a problem in today’s world.


Dependence on Predictability

When considering how well a technology holds up over time, one informative question is how well the system handles variability, particularly in its inputs and outputs, its supply and demand.

Variation and Unpredictability Are the Electric Grid’s Enemies 

The electric grid wants electricity production and consumption to be predictable. Why? Because production must equal consumption, all the time! There are virtually no viable mechanisms for storing electric power at scale. Electricity itself cannot be stored; it has to be converted into some form of potential energy that can later be turned back into the kinetic energy that spins electricity-generating turbines. While there are some instances of grid-scale storage, like molten salt towers in the desert Southwest that store heat to drive generators, or caverns in Alabama that hold compressed air for the same purpose, you don’t see many molten salt towers or underground air caverns in New York City.

What about batteries? This is a promising, hot new area (as we will see in a later post), but again, there are very few grid-scale solutions. There is a battery installation in Southern California that looks like an office building and can power a small city for up to four hours. But that’s about it.

In sum, our current situation, and the situation we’ve been in for years, is that we either use all the electricity we generate, or it cascades out along the wires where it can wreak havoc. Indeed it has in the past. Here is a picture of what electricity consumption in the U.S. and Canada normally looks like:

[Image: electricity consumption across the U.S. and Canada under normal conditions]

Here is what it looked like in the blackout of 2003. 

[Image: the same map during the 2003 blackout]

Source: http://www.am980.ca/

This blackout was triggered by three downed power lines and a software bug in Ohio. The displaced electricity had nowhere to go, causing surges through neighboring balancing authorities across the Northeastern U.S. and Canada and forcing them all to shut down.

A big reason why this has come to a head in recent decades is that production and consumption are more variable and unpredictable than ever before. Take new production from the growth of solar and wind generation. By law, the utilities “must” buy that power, and it must flow into the grid. So what we end up with, sometimes on a daily basis, are spikes in production (e.g., too much wind on stormy days) or in consumption (e.g., more people switching on their air conditioners during some of the hottest summers on record).

As we can see, this “balanced production and consumption rule” that the grid must live by causes a whole bunch of other problems that send balancing authorities into a tizzy trying to avoid blackouts. It also often pushes them into inefficient choices, like firing up a diesel generator at 2 percent efficiency to provide electricity at peak times, when the sun goes down and solar production drops to nothing.

In sum, the grid has terrible, antiquated ways of handling variable production and consumption.

Data Flows Need to Be Predictable

Data flows in enterprise database management systems, like the one Salesforce is built on, also need to be predictable. Data persistence is defined by data structures (e.g., the data model and field data types), which dictate what type of data goes where. Data flows are governed by rules (e.g., database constraints, validation rules, and application logic like workflows, triggers, and database- or application-level security). A big reason these rules and structures exist is data integrity, or, put in layperson’s terms, to make sure the right data goes in and the wrong data stays out.
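To make that concrete, here is a minimal, hypothetical sketch of structure plus rules using Python’s built-in sqlite3 module. This is not Salesforce itself, which expresses the same concepts through field types, validation rules, and triggers; the table and field names are purely illustrative.

```python
import sqlite3

# Structure plus rules, in miniature. The table definition plays the role
# of the data model (what type of data goes where); the CHECK constraints
# play the role of validation rules (right data in, wrong data out).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE opportunity (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,            -- structure: a required text field
        stage  TEXT NOT NULL CHECK (stage IN
               ('Prospecting', 'Negotiation', 'Closed Won', 'Closed Lost')),
        amount REAL CHECK (amount >= 0)  -- rule: no negative deal amounts
    )
""")

# The right data goes in...
conn.execute("INSERT INTO opportunity (name, stage, amount) VALUES (?, ?, ?)",
             ("Acme renewal", "Negotiation", 50000.0))

# ...and the wrong data stays out: this row violates both constraints.
try:
    conn.execute("INSERT INTO opportunity (name, stage, amount) VALUES (?, ?, ?)",
                 ("Bad row", "Maybe?", -10.0))
except sqlite3.IntegrityError as e:
    print("Rejected:", e)
```

The database refuses the bad row outright; that refusal is exactly the data integrity these systems are built to guarantee.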

These rules are totally justified. After all, our data is our business’s most important asset. However, rules are made to be broken, and inflexible structures break down over time.

We’ve all heard about the explosion of data over the past decade and a half. A natural corollary of this data explosion is an explosion of the inherent interconnectedness of data, which leads to new insights into the business. It’s why the “relational” in relational databases is so important.

There are a few catches, though: new insights into the business mean demands for processes to change, sometimes a lot, and quickly. Adding to this pressure is the fact that the users of these systems have multiplied since the days when a few administrative people maintained the data (yep, that’s how it used to be). Today’s users’ needs and demands for data are less predictable and more varied (e.g., by their function in the business, like manager or rep; by the geography where they are located; and by the stage of the business process they need to conduct).

When ops needs to build a new business process, it’s not easy to see up front how new data should be captured, and it’s rarely as simple as adding one or two fields to the opportunity object. Instead, it calls for capturing data in a way that preserves data integrity, and this often means creating multi-object data models according to third-normal-form modeling principles. It also means figuring out whether data from other systems will be affected, or how that data needs to be incorporated into the business process.
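As a rough illustration of why “just add a field” often isn’t enough, here is a hypothetical multi-object model in third normal form, again sketched with Python’s sqlite3 module. The object names loosely mirror Salesforce’s standard objects but are illustrative only.

```python
import sqlite3

# A hypothetical multi-object model in third normal form: each fact lives
# in exactly one place, and objects are linked by foreign keys rather than
# by repeating groups of fields crammed onto the opportunity itself.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    PRAGMA foreign_keys = ON;

    CREATE TABLE account (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );

    CREATE TABLE product (
        id         INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        list_price REAL NOT NULL CHECK (list_price >= 0)
    );

    CREATE TABLE opportunity (
        id         INTEGER PRIMARY KEY,
        account_id INTEGER NOT NULL REFERENCES account(id),
        name       TEXT NOT NULL
    );

    -- One row per product on a deal: quantity and price belong here,
    -- not as "Product 1", "Product 2", ... fields on the opportunity.
    CREATE TABLE opportunity_line_item (
        id             INTEGER PRIMARY KEY,
        opportunity_id INTEGER NOT NULL REFERENCES opportunity(id),
        product_id     INTEGER NOT NULL REFERENCES product(id),
        quantity       INTEGER NOT NULL CHECK (quantity > 0),
        unit_price     REAL NOT NULL CHECK (unit_price >= 0)
    );
""")
```

Designing, reviewing, and building a model like this, plus the screens and automation around it, is exactly the architecture work that keeps users waiting.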

So a lot of the time, users have to wait for the functionality they need because it takes time to architect and build, sometimes a long time. But the show must go on, and that means users break the rules by resorting to workarounds. It’s part of the reason you see such poor adoption of core systems today, such poor data quality in those systems, and so many data silos and rogue Excel spreadsheets springing up.

Centralization in the Electric Industry Is Outdated

Another factor contributing to the demise of the American grid is that it is a highly centralized system of production and distribution.

The entire country is divided into three major interconnections, each a huge synchronized grid. The power utilities are largely responsible for generating power at big power plants; they control the transmission infrastructure, and they balance the grid. There are literally big command centers, like the one in the movie WarGames, where grid-scale distribution decisions are made day by day.

[Image: a utility command center]

Source: Boston.com

In the old days, centralization meant ease of governance. It also means that power generated 1,000 miles away can show up at our light bulbs almost instantaneously. However, that same interconnectedness means widespread failure if one part of the system goes down (again, look at that striking picture of the 2003 blackout above).

The Downward Spiral of Centralized Data Management Systems

Salesforce and other core systems have centrally stored data and centrally administered access control mechanisms. In the past, administration was usually conducted by IT; it has since grown to include other departments, such as Sales Operations and Centers of Excellence (COEs), but these are still relatively centralized units.

Big, complex, centrally administered systems are slower to change, and as we discussed earlier, this means users are more prone to workarounds, or to simply not following cookie-cutter processes that are too onerous for them. Less adoption of core systems for conducting business processes means people don’t keep their data up to date the way they need to, so data quality gets worse, which means reports don’t show the true state of the business. When neither management nor the field can see the true state of the business, trust in these systems erodes, feeding a downward spiral of ever-poorer user adoption.

There is a term in the electricity industry, the “utility death spiral,” which refers to the phenomenon where utility companies’ business models are failing so badly (another topic altogether) that they can’t maintain the grid, which further exacerbates their problems and undermines their ability to serve their mission. The downward spiral of poor user adoption could likewise be characterized as a fundamental flaw of enterprise systems when it occurs. As in the electric industry, this downward spiral is a real threat that undermines the core value these systems bring to the table: to serve as the single source of truth and to ensure the customer life cycle runs smoothly and efficiently.

Our systems are undoubtedly under pressure, and we, the consumers of power and the users of data management systems, are putting the pressure on.

So what do we do about it? In our next few posts, we’ll explain that the solution is tangible, even though it sits at that scary place called “the last mile”.

Read the next post, Solutions at the Last Mile of Data Management.

