At NZero, we believe the right data and superior methodologies can free us to think and act boldly on climate. Even amidst dire forecasts, we, as a species, have been slow to act and incomplete in our approach. For too long we have lacked the data, the methods, and the tooling to accurately and practically inform complex decarbonization planning and capital deployment. Instead, we rely on coarse heuristics and generalized approaches that are often manual, time-consuming and expensive to execute.
Today, NZero’s technology answers questions about our vast and complex system of energy and consumption that we once thought impossible to ask. We now have the tools to account for that system, and to make its complexity legible and actionable.
Better data is just the beginning.
Over the past 20 years, a vast array of data-producing sensors and advanced energy meters has proliferated throughout buildings, vehicles and all manner of business processes and operations. This includes IoT devices used in building management systems, as well as smart meters and equipment sub-meters for improved visibility and fault detection. While many use cases have been developed for this enhanced data, it has not been effectively leveraged to drive decarbonization analysis and planning. Good data, however necessary, is not sufficient on its own to improve outcomes.
NZero is adept at capturing, processing, aggregating and enriching the wide variety of sensor and measurement data available. But critically, we have also developed a series of novel methodologies, analytical tools and techniques that leverage this granular data through AI and machine learning systems to provide useful answers to our customers’ real and immediate needs. Data holds the promise of more informed and more effective action, but realizing that promise requires new methods and tools.
A new paradigm for decarbonization planning.
Decarbonization planning is the process of identifying, validating, estimating, and ranking a large number of possible emissions reduction alternatives for a project, portfolio or public entity. These plans often dictate how limited resources are applied to reducing carbon emissions, and how quickly those reductions are enacted. Therefore, any improvement in the planning and assessment process can directly lead to additional emissions reductions and increase the rate of decarbonization.
Decarbonization planning is constrained by its inherent complexity, as well as a reliance on manual processes and a limited expert knowledge base. The result is a slow, expensive procedure that often yields suboptimal plans. The field has traded accuracy for tractability, employing coarse heuristics and rough prioritization constraints in order to reduce problems to a solvable level of complexity. For instance, calculations of intervention effects rely on overly generalized and inaccurate estimates of reductions, which are often applied indiscriminately across an entire portfolio (e.g. replacing all heating elements with heat pumps, without data demonstrating the relative efficacy of this intervention at individual locations). Our goal is not to criticize the talented and dedicated professionals in the sustainability industry, but rather to highlight the challenges they face on a daily basis – challenges that cannot be solved without better tools. We create these tools, along with systems that augment practitioners, accelerating and enhancing their work and amplifying the effectiveness of their efforts.
We rethought the existing decarbonization process end-to-end, working backwards from the goal of identifying an optimal set of actions that accomplishes the objectives of emissions and cost reduction within the limits of resource constraints. At its core, this is a classic optimization problem, a class of problem with a long history in operations research that is applied across a variety of industries and contexts.
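To make this framing concrete, here is a minimal sketch of the optimization problem in Python (hypothetical intervention names and numbers, and a brute-force search rather than the solvers used in practice): select the set of candidate interventions that maximizes forecast emissions reductions within a capital budget.

```python
from itertools import combinations
from typing import NamedTuple

class Intervention(NamedTuple):
    name: str
    cost: float              # capital cost, USD (illustrative)
    annual_reduction: float  # forecast reduction, tCO2e per year (illustrative)

def best_plan(candidates, budget):
    """Brute-force search over subsets: maximize reductions subject to the budget."""
    best, best_reduction = (), 0.0
    for r in range(1, len(candidates) + 1):
        for subset in combinations(candidates, r):
            cost = sum(c.cost for c in subset)
            reduction = sum(c.annual_reduction for c in subset)
            if cost <= budget and reduction > best_reduction:
                best, best_reduction = subset, reduction
    return best, best_reduction

# Hypothetical candidates for a single site
candidates = [
    Intervention("heat_pump_retrofit", 120_000, 85.0),
    Intervention("led_lighting", 20_000, 12.0),
    Intervention("rooftop_solar", 200_000, 140.0),
]
plan, total_reduction = best_plan(candidates, budget=250_000)
```

Production planning would swap the exhaustive search for an integer-programming or heuristic solver, but the structure is the same; the harder part is producing trustworthy cost and reduction estimates for each candidate.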
We have found that the challenge here is not with the optimization algorithms themselves, but with the lack of an accurate understanding of the outcomes of a sufficiently broad set of candidate possibilities to optimize against. Exploring and searching the space of possible decarbonization options, and accurately forecasting their effects, are the most critical parts of the problem, yet they have been too laborious and time-intensive to be widely practiced.
We solved this search problem through a combination of improved access to asset measurements and advanced scenario modeling and effect forecasting. The result is an approach that can rapidly and exhaustively explore large numbers of possible interventions while considering their interdependencies. For example, the NZero advanced building retrofit tool is designed to enhance the precision and effectiveness of retrofit planning, a critical component of the journey towards decarbonization. Our approach marries the accuracy of first-principles, physics-based simulations with the efficiency of data-driven and heuristic methodologies. This hybrid strategy adds both speed and scale to the process of retrofit simulation. By integrating these methods, we can rapidly analyze and predict the impacts of various retrofit options on a building's energy usage and environment. The result is a system that computes a large number of decarbonization scenarios, exploring the entire search space of possibilities. It does this quickly and scalably, and provides accessible modeling based on historical measurement data, requiring minimal building metadata to get started.
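As a simplified illustration of how interdependencies enter the picture (the trace format, intervention functions and numbers below are assumptions for illustration, not NZero's retrofit models), each scenario can be scored by applying its interventions in sequence to a baseline hourly meter trace, so that each measure's effect is computed on the load that remains after the measures before it:

```python
from typing import Callable

HourlyTrace = list[float]                        # kWh for each hour of the year
Intervention = Callable[[HourlyTrace], HourlyTrace]

def lighting_retrofit(trace: HourlyTrace) -> HourlyTrace:
    # Hypothetical LED retrofit: trims a fixed share of every hour's load.
    return [kwh * 0.93 for kwh in trace]

def rooftop_solar(trace: HourlyTrace) -> HourlyTrace:
    # Hypothetical solar offset during midday hours; its benefit depends on the
    # load remaining after earlier measures, which is the interdependency.
    return [max(kwh - 5.0, 0.0) if 10 <= h % 24 <= 15 else kwh
            for h, kwh in enumerate(trace)]

def simulate_scenario(baseline: HourlyTrace, plan: list[Intervention]) -> float:
    """Apply interventions in order to the baseline trace; return total annual kWh."""
    trace = baseline
    for apply_measure in plan:
        trace = apply_measure(trace)
    return sum(trace)

baseline = [20.0] * 8760                         # flat, illustrative annual load
scenarios = {
    "lighting_only": [lighting_retrofit],
    "lighting_plus_solar": [lighting_retrofit, rooftop_solar],
}
annual_kwh = {name: simulate_scenario(baseline, plan)
              for name, plan in scenarios.items()}
```

Enumerating and scoring scenarios this way is only tractable when each individual simulation is fast, which is why the hybrid physics-plus-data approach matters.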
Building for speed and scale.
These capabilities are built upon a foundation of data acquisition, building energy and emissions modeling, and machine learning models for inferring the effects of complex retrofits and interventions. We chose to base our models on building-level, hourly measurements. While deeply instrumented, IoT-rich buildings do exist, they still represent a minority of the built environment. With utility AMI (Advanced Metering Infrastructure) continuing to expand (72% in 2022), we found through experimentation that the hourly data available from AMI strikes a balance between building-modeling accuracy and widespread data availability. AMI meter data is readily available through utility systems, can ultimately be obtained at scale, and can be simulated when necessary. We utilize this richer data to train improved predictive building energy models, and it serves as the basis for our retrofit forecasts through trace-based simulation.
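As a rough sketch of what training such a model can look like (the file name, column names and scikit-learn model choice are illustrative assumptions, not NZero's pipeline; an hourly AMI export already joined with weather data is assumed):

```python
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor

# Hypothetical hourly dataset: one row per AMI interval, joined with local weather.
df = pd.read_csv("building_ami_hourly.csv", parse_dates=["timestamp"])
df["hour"] = df["timestamp"].dt.hour
df["day_of_week"] = df["timestamp"].dt.dayofweek
df["month"] = df["timestamp"].dt.month

features = ["hour", "day_of_week", "month", "outdoor_temp_c", "solar_irradiance"]
target = "kwh"

# Hold out the most recent 20% of hours to check the model on unseen conditions.
split = int(len(df) * 0.8)
train, test = df.iloc[:split], df.iloc[split:]

model = HistGradientBoostingRegressor(max_iter=300)
model.fit(train[features], train[target])
print("hold-out R^2:", model.score(test[features], test[target]))
```

A baseline model like this predicts what a building would consume under given weather and calendar conditions, which provides the reference point against which retrofit effects can be forecast.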
To forecast the effects of individual interventions and retrofits, or combinations thereof, accurately and cheaply (both to configure and to compute), we utilize real meter trace data from historical measurements. This trace-based simulation approach allows us to tailor potential retrofit or energy choices to a building's specific usage patterns and environmental conditions. Additionally, we developed our own ML-based building energy models, trained on historical meter data along with other critical factors such as temperature and weather. We leverage parameters derived from these data-driven building models, such as thermostat set points and heating/cooling sensitivities, combined with supplemental ML models, to predict the effects of building retrofits with precision.
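One common way to derive heating and cooling sensitivities from the same hourly data is a change-point (degree-term) regression. The sketch below is an illustration of that standard technique rather than NZero's proprietary building model, and it assumes fixed balance-point temperatures for brevity, whereas in practice those would also be fitted:

```python
import numpy as np

def degree_terms(outdoor_temp_c: np.ndarray,
                 heat_balance_c: float = 15.0,
                 cool_balance_c: float = 21.0) -> np.ndarray:
    """Heating/cooling degree terms around assumed balance-point temperatures."""
    hdd = np.clip(heat_balance_c - outdoor_temp_c, 0.0, None)
    cdd = np.clip(outdoor_temp_c - cool_balance_c, 0.0, None)
    return np.column_stack([np.ones_like(outdoor_temp_c), hdd, cdd])

def fit_sensitivities(outdoor_temp_c: np.ndarray, hourly_kwh: np.ndarray):
    """Least-squares fit: kwh ≈ base + heat_slope * HDD + cool_slope * CDD.

    base is the weather-independent load; heat_slope and cool_slope
    (kWh per degree-hour) are the building's heating and cooling sensitivities.
    """
    X = degree_terms(outdoor_temp_c)
    (base, heat_slope, cool_slope), *_ = np.linalg.lstsq(X, hourly_kwh, rcond=None)
    return base, heat_slope, cool_slope
```

Parameters like these, alongside the trained ML models, let retrofit forecasts reflect how a specific building actually responds to weather rather than how an average building does.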
Our current trace-driven simulation operates at hourly granularity. For each hour, we simulate the effects of the configured asset changes within that timeframe to determine shifts in energy or emissions sources, as well as overall energy usage. This level of granularity enables detailed calculations for energy transitions like fuel switching, such as the electrification of heating systems or the implementation of distributed energy generation.
For instance, we can calculate the original heat output from meter readings and then estimate the energy required to achieve the same heat output using an electric heat pump. This calculation incorporates the amount of heat generated by the existing combustion equipment, the heat pump's coefficient of performance, the outdoor temperature, and the electric grid's emission intensity factor, all at an hourly level. Simulating at this granularity allows all of these dynamic factors to interact, producing an accurate forecast of the retrofit's impact across the dimensions of usage, cost and emissions.
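A stripped-down version of that hourly calculation might look like the following; the constant furnace efficiency, the simple linear COP-versus-temperature curve and the emission factors are illustrative assumptions rather than NZero's calibrated models:

```python
KWH_PER_THERM = 29.3  # energy content of one therm of natural gas

def cop_at_temp(outdoor_temp_c: float) -> float:
    """Illustrative air-source heat pump COP curve: efficiency falls as it gets colder."""
    return max(1.8, min(4.0, 2.8 + 0.06 * outdoor_temp_c))

def simulate_heat_pump_hour(gas_therms: float,
                            outdoor_temp_c: float,
                            grid_kgco2_per_kwh: float,
                            gas_kgco2_per_therm: float = 5.3,
                            furnace_efficiency: float = 0.92) -> dict:
    """Forecast one hour of heating electrification from a historical gas meter reading."""
    heat_delivered_kwh = gas_therms * KWH_PER_THERM * furnace_efficiency
    heat_pump_kwh = heat_delivered_kwh / cop_at_temp(outdoor_temp_c)
    return {
        "baseline_kgco2": gas_therms * gas_kgco2_per_therm,
        "heat_pump_kgco2": heat_pump_kwh * grid_kgco2_per_kwh,
        "heat_pump_kwh": heat_pump_kwh,
    }

# Iterating this over the hourly gas, temperature and grid-intensity traces
# yields the forecast emissions series compared in the figure below.
example_hour = simulate_heat_pump_hour(gas_therms=0.4, outdoor_temp_c=-2.0,
                                       grid_kgco2_per_kwh=0.35)
```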
The figure below displays the daily emissions for a building in Northern Nevada, comparing the original total emissions from electricity and natural gas (orange) with the forecasted emissions of an air source heat pump (green) for the first half of 2023. The emissions savings ratio for this heat pump, shown in the upper panel, varies with the changing carbon intensity of the Nevada grid – which is lower in the spring than in the winter due to greater on-grid solar generation – as well as with the equipment's efficiency, which can drop nearly 40% in local winter weather. In this case, emissions savings range from roughly 2x in January to greater than 4x in the spring. The key point is that savings depend on a site's environment, electrical grid, equipment behavior and load patterns, and outcomes can therefore vary significantly from location to location. Generalized heuristics or percentage reductions are simply not suitable for accurate forecasting.