|
Evaluation of the United States Environmental Protection Agency's Energy Star Labeling Program and Reported Energy Savings
|
| Presenter(s):
|
| Jerri Dorsey, United States Environmental Protection Agency, dorsey.jerri@epa.gov
|
| Gabrielle Fekete, United States Environmental Protection Agency, fekete.gabrielle@epa.gov
|
| Abstract:
The U.S. Environmental Protection Agency's (EPA) ENERGY STAR program is a voluntary energy efficiency program. In 2006, EPA reported that using ENERGY STAR products prevented greenhouse gas emissions equivalent to those from 23 million vehicles and saved Americans $12 billion on their utility bills. The EPA's Office of Inspector General (OIG) evaluated both how effectively EPA is managing the ENERGY STAR Product Labeling Program and the validity and accuracy of the overall program's reported energy savings. The OIG found that EPA lacks reasonable assurance that the ENERGY STAR self-certification process is effective, and that there is little oversight of how the program's label is used in retail stores. Use of the label on products that do not meet ENERGY STAR requirements may weaken the label's value and negatively impact the program. The OIG also found the program's savings claims were inaccurate; deficiencies included the lack of a data-collection quality review and reliance on unverified estimates, forecasting, and third-party reporting.
|
|
Counting on Market Intelligence: When the Experts are Wrong
|
| Presenter(s):
|
| Anne West, Cadmus Group Inc, anne.west@cadmusgroup.com
|
| Ben Bronfman, Cadmus Group Inc, ben.bronfman@cadmusgroup.com
|
| Shahana Samiullah, Southern California Edison, shahana.samiullah@sce.com
|
| Abstract:
Can we believe market experts when structuring energy efficiency program evaluations? How are evaluation efforts affected when industry experts are wrong about their target market? In practice, industry market actors may not possess the market intelligence needed to understand the market and the true potential of their program offering. Market actors who should have that intelligence have been wrong. Proposers may be fascinated with their solution to a technical problem yet weak on market characterization, and may not have found the right business model to showcase their technology. Evaluating programs that rely on industry experts' market assessments requires an evaluation design that can unmask misunderstandings of the market and flag mistakes before they are repeated. This paper explores several programs in which experts did not fully understand their market, examines the impact on evaluation efforts, and suggests an evaluation design that identifies errors early in the program cycle.
|
|