
PV System Energy Test

Sarah Kurtz (a), Pramod Krishnani (b), Janine Freeman (a), Robert Flottemesch (c), Evan Riley (d), Tim Dierauf (e), Jeff Newmiller (f), Lauren Ngan (g), Dirk Jordan (a), and Adrianne Kimber (h)

(a) National Renewable Energy Laboratory, Golden, CO, USA; (b) Belectric, Newark, CA, USA; (c) Constellation, Baltimore, MD, USA; (d) Black & Veatch Energy, San Francisco, CA, USA; (e) SunPower Corporation, San Jose, CA, USA; (f) DNV GL, San Ramon, CA, USA; (g) First Solar, San Francisco, CA, USA; (h) Incident Power Consulting, Oakland, CA, USA

Abstract — The performance of a photovoltaic (PV) system depends on the weather, seasonal effects, and other intermittent issues. Demonstrating that a PV system is performing as predicted requires verifying that the system functions correctly under the full range of conditions relevant to the deployment site. This paper discusses a proposed energy test that applies to any model and explores the effects of the differences between historical and measured weather data and how the weather and system performance are intertwined in subtle ways. Implementation of the Energy Test in a case study concludes that test uncertainty could be reduced by separating the energy production model from the model used to transpose historical horizontal irradiance data to the relevant plane.

Index Terms — photovoltaic systems, model verification, photovoltaic performance, energy yield, performance guarantee.

I. INTRODUCTION

Accurate prediction and verification of photovoltaic (PV) plant output is essential for improving predictive models and for proving a performance guarantee [1]. Reducing the uncertainty of the verification is beneficial to all parties.

Variable weather conditions complicate verification of PV plant performance and require revision of projected energy yield based on differences between the historical weather data and weather observed during the test.

A test that extends through an entire year is beneficial because it can identify seasonal issues such as wintertime row-to-row shading or summertime shading from local trees. If a model has a high accuracy at all times of year, then the test time can be reduced accordingly. But the purpose of an energy test (which measures the integrated output over time), rather than a capacity test (which measures the power generated under specific conditions), is to confirm performance over all conditions.

An energy test is meant to define whether the measured system performance agrees with what is expected from the installed equipment, as computed with the model. However, because local weather strongly affects the generated electricity, it is useful to differentiate the system being tested from external factors, namely the weather. A well-designed test defines a test boundary that isolates the system being characterized (the system boundary). The system output is defined by identifying the production meter. Identifying the locations and types of sensors for characterizing the weather defines the test boundary. For example, measuring module (instead of ambient) temperature implies that overheating of the modules due to poor air circulation will be interpreted as hot weather rather than as a poorly performing system.

The final test result may depend on the choice of the test boundary. Two examples of test boundaries are shown in Fig. 1. The choices shown in Fig. 1a are consistent with the data available in historical weather files. The choices shown in Fig. 1b raise two concerns: 1) these data are not typically available from historical weather data, and 2) poor system installation may not be detected by the test.

Fig. 1a. Test boundary when global horizontal irradiance, ambient temperature, and wind speed are measured.

Fig. 1b. Test boundary when plane-of-array irradiance and module temperature are measured.

It is the purpose of this paper to discuss the subtleties of an energy test and to provide insight into its use. First, we define terminology for discussing the models and the proposed Energy Test, explain why differences between historical weather data and measured weather data can confuse the test, and show how each of these affects the test boundary discussed above. The Test is then applied in a case study. The lessons learned from the case study are described related to 1) inconsistencies in the historical and measured weather data, 2) systematic errors introduced by excluding questionable data, and 3) valuable data-screening techniques. Finally, some key challenges in accurately applying the test are summarized and strategies for reducing test uncertainty are presented.

II. DEFINING THE ENERGY TEST TERMINOLOGY

An accurate prediction of system energy output [1] typically requires an energy production model [2-7] of the effects of irradiance, temperature, shading, system design, and the local environment. The terms used for describing the energy in this study are defined in Fig. 2 [8]. These definitions provide an unambiguous way to differentiate the predicted energy (based on historical weather data) from the expected energy (based on the actual weather data during the test), as shown in Fig. 2.

Fig. 2. Definition of predicted, expected, and measured energies.

To be consistent, the predicted and expected outputs must be calculated using the same model. If the expected energy differs from the predicted energy, this difference is considered to be outside of the test boundary and is not relevant to the test result (which compares the expected and measured energies). The expected energy differs from the predicted energy because the plant rarely operates under measured weather that is identical to historical data.

III. RAMIFICATIONS OF HISTORICAL WEATHER DATA FORMAT

The power generation of a PV system is most directly linked to the plane-of-array (POA) irradiance, but historical weather data are most conveniently supplied for the horizontal plane and transposed to the POA using a transposition and decomposition model (Fig. 3). A choice is made to use global horizontal irradiance (GHI) (consistent with historical weather data) or POA irradiance (more closely predicting the PV system output) in the energy-production model.

Multiple transposition models have been developed to correct for the variation in diffuse light that strikes different surfaces depending on the weather and other conditions [9]. If GHI is measured, an important part of the PV performance model is the choice of the transposition and decomposition model. If POA irradiance is measured, then no transposition or decomposition model is needed, although the sensor azimuthal and tilt orientations must be verified.

Studies have often found that inaccuracies in the transposition model are the largest source of bias error in energy-production models based on GHI (referred to here as "GHI-based models"). To avoid this bias error, an energy-production model based on POA irradiance (referred to here as "POA-based models") may be used, but then historical weather files are usually not available. Transposition to the POA for a tracking system is complicated by the tolerances in the tracking system and is not addressed here.

Fig. 3. Drawing to show how an energy-production model based on global horizontal irradiance (GHI) requires two steps: 1) a transposition model to convert GHI to plane-of-array (POA) irradiance, and 2) the model predicting PV output from POA irradiance. If only GHI data are available, a decomposition model is also used, adding uncertainty.

To be consistent, the predicted and expected outputs must be calculated using the same GHI-based model. If POA irradiance is measured, then a POA-based model may be used, but this breaks the direct link between the predicted and expected outputs. If both GHI and POA irradiance are measured, then the transposition model can be verified independently from the PV performance (POA-based model).

If the test boundary causes measurement of weather data not found in the historical weather files (such as module temperature and POA irradiance), then the test boundary may be found to penetrate the intended system boundary, as shown in Fig. 1b. In this case, some attributes of system performance may fall outside of the test, as described in Table 1.
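As an illustration of the two-step chain in Fig. 3, the following minimal sketch decomposes GHI into direct and diffuse components and transposes them to the POA. It assumes the open-source pvlib library and hypothetical site parameters; it is only an illustration of the GHI-based chain, not the model used in this work.

# GHI -> POA sketch (assumes the pvlib library; site parameters are hypothetical).
import pandas as pd
import pvlib

LAT, LON, TZ = 39.7, -105.2, 'Etc/GMT+7'      # hypothetical fixed-tilt site
TILT, AZIMUTH = 25, 180                        # degrees, south-facing

times = pd.date_range('2013-06-01', '2013-06-02', freq='15min', tz=TZ)
# Clear-sky GHI stands in for a measured GHI series in this illustration.
ghi = pvlib.location.Location(LAT, LON, tz=TZ).get_clearsky(times, model='haurwitz')['ghi']
solpos = pvlib.solarposition.get_solarposition(times, LAT, LON)

# Step 1: decomposition model (Erbs) estimates DNI and DHI from GHI alone.
decomp = pvlib.irradiance.erbs(ghi, solpos['apparent_zenith'], times)

# Step 2: transposition model (Hay-Davies here) converts the components to POA.
poa = pvlib.irradiance.get_total_irradiance(
    surface_tilt=TILT, surface_azimuth=AZIMUTH,
    solar_zenith=solpos['apparent_zenith'], solar_azimuth=solpos['azimuth'],
    dni=decomp['dni'], ghi=ghi, dhi=decomp['dhi'],
    dni_extra=pvlib.irradiance.get_extra_radiation(times),
    albedo=0.2, model='haydavies')
print(poa['poa_global'].head())

If POA irradiance is measured directly, both steps (and their uncertainty) drop out of the expected-energy calculation, which is the motivation for the POA-based models discussed above.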

IV. INTRODUCTION TO TEST METHOD

The Test Method has been described in detail elsewhere and is being further developed by the International Electrotechnical Commission (IEC) as a draft standard [10]. In short, the model and weather data used to generate the predicted energy are carefully documented. The measured data are screened for anomalies according to recommended guidelines, including exclusion of data periods for which more than 10% of the production or irradiance data are missing or erroneous. Excluded or corrected data must be discussed and agreed upon.


TABLE I
EXAMPLES OF CHOICES THAT CAN MIX SYSTEM PERFORMANCE WITH WEATHER BY PLACING SOME ASPECTS OF SYSTEM PERFORMANCE OUTSIDE OF THE TEST BOUNDARY (SEE FIG. 3)

Measurement penetrating system boundary | System installation detail | System performance | Associated uncertainty for one-year evaluation
Module temperature | Circulation of air around modules | Module temperature affects system efficiency | Module operating temperature increase of 5°C leads to 0.5%-1.5% loss
POA irradiance | Inaccurate positioning of sensor | Available irradiance may be erroneously measured | An error of 10° azimuth in the positioning of the POA sensor may cause ~1% error
POA irradiance | Ground albedo | Ground reflection affects irradiance reaching array | A change in albedo of 0.2 causes <1% change for tilt of 30°

The modeling is repeated using the measured meteorological data to create the expected output. Then the measured data are compared with the expected data.
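A schematic of this final comparison is sketched below; the data-frame columns and the pass/fail threshold are hypothetical placeholders rather than values prescribed by the draft method.

# Schematic comparison of measured vs. expected energy over the included periods.
# Column names and the threshold are illustrative placeholders.
import pandas as pd

def energy_test(df: pd.DataFrame, threshold: float = 0.97) -> dict:
    # df: 15-min rows with columns 'measured_kwh', 'expected_kwh', 'excluded' (bool)
    included = df[~df['excluded']]
    measured = included['measured_kwh'].sum()
    expected = included['expected_kwh'].sum()
    ratio = measured / expected
    return {'measured_MWh': measured / 1000.0,
            'expected_MWh': expected / 1000.0,
            'ratio': ratio,
            'passes': ratio >= threshold}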

High-quality data are of utmost importance for any performance test. Key elements include accurate sensor calibrations, consistency of data logging, prompt response to equipment (sensor or data logger) malfunction, and frequent cleaning of irradiance sensors. Missing or inaccurate data will compromise the quality of any performance test.

V. CASE STUDY

The methods described above were implemented in a case study to verify that multiple analysts can apply the method to the same data set with consistent results and to identify opportunities for improving the method. A year of 15-min data from a large fixed-tilt PV system in California was chosen. Data from each of two meteorological stations included GHI from both a thermopile and a reference cell, ambient and module temperature, and average and maximum wind speed. The electrical data came directly from the meter at the point of grid connection as well as from utility-grade meters on each individual inverter. Anomalies were added to the data to increase the probability of seeing differences in the evaluations by different individuals.

The party supplying the data also supplied a PVsyst [2] model for the electricity production. The model documentation included the hardware and meteorological input files, a listing of the settings in PVsyst, the report generated by the PVsyst simulation, and a complete listing of the results for the 8,760 hours in the year. Based on this documentation, the modeled results could be duplicated within 0.1%. The use of 15-min data resulted in exclusion of an entire hour whenever even one data point of the 15-min set was missing or flagged for exclusion, because of the criterion requiring >90% valid data in each hour. Eight individuals evaluated the data; four completed the full analysis from start to finish, including the PVsyst modeling.

The results of the case study were evaluated to compare the experience of each analyst according to:

• methodologies used
• data anomalies identified and how they were addressed
• filtered data (aggregated measured generated electricity and irradiance after filtering and addressing anomalies)
• end result in terms of the PVsyst output.

The first comparison of results showed differences ranging from 0.1% to 8%, with the inclusion or exclusion of system outages being the primary variable. No information had been provided about the cause of the system outages, so some analysts included data for the outages. Others excluded those data, providing an analysis consistent with model verification. Part of the Test Method is a discussion of which points should be included or excluded. When the handling of the data was agreed upon, the results converged to within 0.1%.

VI. LESSONS LEARNED FROM CASE STUDY

A. Alignment of meteorological files

As part of the case study evaluation, it was noted that the original predicted energy was calculated using a TMY3 file, which contains more information than the meteorological files created from the measured data. If PVsyst has only GHI data, it estimates the diffuse horizontal irradiance (DHI) from the clearness index. To quantify the effect of using measured or estimated DHI, the TMY3 data were extracted in different ways and the PVsyst model was rerun. The annual predicted energies are summarized in Table 2 for calculations using direct-normal irradiance (DNI) or DHI data with the GHI data. The largest difference (0.88%) was found between using the entire TMY3 data set and using only the GHI data from the TMY3 file.

This uncertainty appears in the predicted energy rather than in the expected energy, so the difference would not affect the outcome of a performance guarantee, which compares the measured energy to the expected energy. Nevertheless, it raises the question of whether the same transposition model should be required for both the predicted and expected energies for the sake of consistency, when using the added data would provide more accurate information on which to base a financial decision.

TABLE II

COMPARISON OF PREDICTED ENERGY FROM TMY3 DATA

Data Used | Annual Predicted Energy (MWh)
All TMY3 data | 1,933
GHI and DNI | 1,940
GHI and DHI | 1,945
GHI only | 1,950
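The Table 2 exercise was performed by re-running PVsyst with different subsets of the TMY3 file. The underlying question, how much the result shifts when DHI is estimated from the clearness index instead of taken from the file, can also be examined outside PVsyst; the sketch below uses pvlib and assumes an hourly data frame 'tmy' with 'ghi', 'dni', and 'dhi' columns and a precomputed solar-position frame 'solpos'.

# Effect of estimated vs. file DHI on annual POA insolation (pvlib-based sketch;
# 'tmy', 'solpos', tilt, and azimuth are assumptions for illustration).
import pvlib

def annual_poa_kwh_m2(ghi, dni, dhi, solpos, times, tilt=25, azimuth=180):
    poa = pvlib.irradiance.get_total_irradiance(
        tilt, azimuth, solpos['apparent_zenith'], solpos['azimuth'],
        dni=dni, ghi=ghi, dhi=dhi,
        dni_extra=pvlib.irradiance.get_extra_radiation(times),
        model='haydavies')
    return poa['poa_global'].sum() / 1000.0      # hourly W/m2 -> kWh/m2

# est = pvlib.irradiance.erbs(tmy['ghi'], solpos['apparent_zenith'], tmy.index)
# shift = annual_poa_kwh_m2(tmy['ghi'], est['dni'], est['dhi'], solpos, tmy.index) / \
#         annual_poa_kwh_m2(tmy['ghi'], tmy['dni'], tmy['dhi'], solpos, tmy.index) - 1

A shift of a fraction of a percent in POA insolation propagates almost directly into predicted energy, which is the scale of the differences in Table 2.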

Even though the philosophy of consistency between the modeling of the original predicted energy (based on historical weather data) and the expected energy (based on measured data) requires that these are treated identically, the predicted energy may be more accurate if additional information is used.

B. Treatment of erroneous data

The draft Test Method required that the entire hour of data be treated as missing data if >10% of the data were missing or erroneous. The case study data set showed spurious data when the system was starting up. A decision was made to exclude the spurious data rather than setting those values to zero. The combination of the requirement of 90% of the data for each hour and excluding a point whenever the system starts up leads to systematic exclusion of most short system outages. To demonstrate the potential impact, we give an extreme scenario in which a system trips off every day, but the maintenance crew resets it within the hour. The actual loss in production could approach 10% (depending on the hour when it trips off and the fraction of the hour when it is off); but if these time periods were excluded because of the start-up transient, the analysis would systematically neglect this loss in production. Because the transient data at start up or shut down systematically occur near outages within the data set, the result is especially sensitive to treatment of these data.

The draft Test Method allows replacement of missing data with modeled data rather than setting all numbers to zero, but this suggestion was not directly discussed as a strategy to “fix” hours with >10% missing data. The implication was that entire hours of data might be replaced with the modeled data. However, as discussed above, for the case of hours with partial data, there is a clear benefit to replacing the rejected data with data from the model and then retaining the remaining data points for the rest of the hour—rather than throwing out the entire hour, which tends to throw out the outages preferentially, introducing bias into the test result.
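The two treatments are contrasted in the sketch below, assuming 15-min series of measured production, a boolean flag series, and a parallel series of modeled production; the function names and data layout are illustrative, not part of the draft method.

# Two treatments of flagged 15-min points (illustrative; follows the discussion above).
import pandas as pd

def drop_incomplete_hours(meas: pd.Series, flagged: pd.Series) -> pd.Series:
    # Exclude the whole hour if more than 10% of its 15-min points are flagged.
    bad_share = flagged.groupby(flagged.index.floor('h')).mean()
    bad_hours = bad_share[bad_share > 0.10].index
    return meas[~meas.index.floor('h').isin(bad_hours)]

def fill_flagged_points(meas: pd.Series, flagged: pd.Series,
                        modeled: pd.Series) -> pd.Series:
    # Replace only the flagged points with modeled values; keep the rest of the hour.
    return meas.where(~flagged, modeled)

Because start-up transients cluster around outages, the first treatment preferentially discards the very hours in which production was lost, while the second retains them.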

C. Systematic data verification methodologies

The analysts in the study used a wide range of techniques for identifying potentially erroneous data. Most analysts used flags to keep track of the suspicious data rather than simply changing the data directly. The flagged data were then easily summarized for discussion.

Those analysts relying on visual inspection alone sometimes missed erroneous data. Table III summarizes a systematic filtering methodology that was quite successful.
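A condensed version of a few of the Table III checks, applied to a 15-min irradiance series, is sketched below; the thresholds mirror the table, but the data layout is an assumption made for illustration.

# A few of the Table III flags for a 15-min irradiance series (thresholds from
# Table III; the series layout is illustrative).
import pandas as pd

def flag_irradiance(irr: pd.Series) -> pd.DataFrame:
    flags = pd.DataFrame(index=irr.index)
    flags['missing'] = irr.isna()
    flags['range'] = (irr < -6) | (irr > 1400)        # W/m2
    step = irr.diff().abs()
    flags['dead'] = (step < 0.0001) & (irr > 5)       # stuck sensor while sun is up
    flags['jump'] = step > 800                        # unreasonable 15-min change
    flags['any'] = flags.any(axis=1)
    return flags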

Additionally, this system included data streams for each inverter, allowing for additional checks. There were two data loggers. When a data logger was reset, half of the inverters reported zero output. Most of the analysts missed these events.

After questionable data were identified, discussions of how to address each situation were essential. Such discussions may be greatly simplified if a contract specifies treatment of specific situations.

VII. CHALLENGES IN APPLYING THE TEST

A. Inconsistent historical and measured weather data

As described above, a choice is made between 1) using a consistent GHI-based model for both the predicted and expected energy calculations, and 2) using the available data to provide the lowest uncertainty calculations of the predicted and expected energies. Consistency is preferred, but reduced uncertainty is also desired, causing considerable debate about the preferred approach. Consistency requires that the historical and measured weather data have the same formats. But allowing differences between the data sets provides opportunity to reduce uncertainty in either the predicted or expected energy calculations (Table IV). Italics in Table IV indicate an opportunity to reduce uncertainty if the historical and measured data are not required to be consistent; other notes in Table IV indicate that the data constrain the accuracy.

Data recorded at high frequency show short-term measurements of higher irradiance than are seen with hourly averaging [11]. An hour with constant irradiance of 500 W/m2 differs from an hour with zero irradiance for half an hour and 1000 W/m2 irradiance for the other half, but these will be identical in an hourly data set [12,13].
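The point is easy to demonstrate numerically: the two hours have the same hourly-average irradiance, but any nonlinearity in the system response makes their energies differ, as in the toy example below (the linear DC model and the clipping limit are hypothetical).

# Identical hourly-average irradiance, different energy under a nonlinear response.
import numpy as np

constant = np.full(4, 500.0)                      # four 15-min values, W/m2
variable = np.array([0.0, 0.0, 1000.0, 1000.0])   # same hourly mean of 500 W/m2

def energy_kwh(irr, dc_kw_per_kw_m2=1.0, ac_limit_kw=0.8):
    dc = dc_kw_per_kw_m2 * irr / 1000.0           # toy linear DC model
    ac = np.minimum(dc, ac_limit_kw)              # inverter clipping nonlinearity
    return ac.sum() * 0.25                        # 15-min steps -> kWh

print(energy_kwh(constant), energy_kwh(variable))  # 0.5 vs. 0.4 kWh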

Full instrumentation to collect GHI, DHI, DNI, and POA irradiance data adds cost to a project, so DHI and DNI are often omitted.

TABLE III
SYSTEMATIC APPROACH FOR SETTING FLAGS DURING VERIFICATION OF 15-MIN DATA

Flag Type | Description | Irradiance (W/m2) | Temperature (°C) | Wind Speed (m/s) | Power (kW)
Time series | Missing, duplicated timestamps | – | – | – | –
Range | Unreasonable value | < -6 or > 1400 | < -30 or > 50 | < 0 or > 32 | System dependent
Missing | Individual values are missing | – | – | – | –
Dead | Values stuck at a single value over time. Use derivative. | < 0.0001 while value is > 5 | < 0.0001 | < 0.001 while value is > 1 | < 0.0001 while value is > 5% of nameplate
Jump | Unreasonable change between data points. Use derivative. | > 800 in 15 min | > 4 in 15 min | > 10 in 15 min | > 80% of nameplate
Pairwise | Significant difference between like sensors | > 200 | > 3 | > 0.5 | Power < 0 while irradiance > 100 W/m2
Time-series pairwise | Values between like sensors change over time | Drift of average > 1% | Drift of average > 2 | – | –


As described above for a model implemented in PVsyst, using the complete TMY3 file instead of only the GHI data resulted in a change of 0.9% in the predicted energy. This 0.9% could be considered small compared with the variability of the weather and the uncertainty of the absolute values in the historical weather data. But it could also be considered large by stakeholders who are estimating their marginal return on investment.

TABLE IV
ASPECTS OF WEATHER DATA THAT CAN BE CONSISTENT OR DIFFERENT TO REDUCE UNCERTAINTY

Data Aspect | Effect on Predicted Energy | Effect on Expected Energy
Frequency of data aggregation | Hourly historical data limit modeling of variable conditions | Frequent measured data can drive a more accurate model
Use of DNI and DHI | Historical data with DNI and DHI allow more accurate transposition to POA | Allows more accurate transposition, but adds cost; most data sets are limited to GHI
GHI vs POA | Transposition of GHI to POA is required, adding uncertainty | Measurement of POA avoids the uncertainty of the transposition
Sensor type | Historical data have been validated relative to thermopile data | Matched sensors improve repeatability by avoiding angle-of-incidence and spectral corrections

In the unusual case when historical weather data measured in the POA with matched reference cells are available, the choice of measuring POA irradiance with a matched sensor would be obvious. A strategy for providing both consistency and low uncertainty is to separate the GHI-based model into a transposition model and a POA-based model, as shown in Fig. 1. Error in the transposition model is then seen only in the predicted energy, whereas the expected energy has substantially lower uncertainty. Because the outcome of a performance guarantee relies on a comparison of the expected and measured energies, this can substantially reduce the uncertainty of the test even if it does not reduce the overall risk to the investor.

B. Test measurement penetrates the intended system boundary

As shown in Table I, if module temperature and/or POA irradiance are measured, some aspects of system installation will lie outside of the test. For example, if the installation quality results in poor flow of air around the modules, the hotter module temperature will be taken as part of the weather and, therefore, not be considered part of the test. Measuring POA irradiance potentially introduces two issues. The first was discussed in Section VII.A because of the inconsistency between the historical and measured weather data. The second is the possibility that the POA data could be erroneous because of error in sensor placement or another effect (e.g., dark gravel in front of the POA sensor).

To assess the systematic error that could be introduced by a POA sensor that is incorrectly positioned or placed in a location with an inappropriate albedo, we modeled several fixed-tilt systems, as shown in Figs. 4 and 5. Both the Perez and Hay transposition models were used [14]. The models had been carefully developed for specific systems and validated as described in reference [15]. The albedo and tilt (Fig. 4) and azimuthal orientation (Fig. 5) were then varied while all other model parameters were kept fixed.

Figure 4 shows that the effect of albedo depends strongly on tilt, as expected, with plausible variations as high as 2%. A direct measurement of the albedo within the plant could limit this uncertainty to <1%. When using POA irradiance, the albedo lies 1) outside of the test boundary if the albedo is confirmed to match the albedo in the model, or 2) inside of the test boundary if it matches the average albedo in the plant.
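The tilt dependence can be seen from the ground-reflected term alone, sketched below with pvlib; the full Fig. 4 results were produced with complete, validated system models, so this is only a back-of-the-envelope check.

# Ground-reflected contribution as a fraction of GHI (pvlib sketch; a shortcut,
# not the validated models behind Fig. 4).
import pvlib

def ground_reflected_fraction(ghi_series, tilt_deg, albedo):
    # Isotropic ground-reflected irradiance relative to GHI:
    # equals albedo * (1 - cos(tilt)) / 2.
    refl = pvlib.irradiance.get_ground_diffuse(tilt_deg, ghi_series, albedo=albedo)
    return refl.sum() / ghi_series.sum()

# For a 30-degree tilt, raising the albedo from 0.2 to 0.4 adds roughly
# 0.2 * (1 - cos(30°)) / 2, about 1.3% of GHI, to the POA irradiance, and less
# than that as a fraction of total POA, roughly consistent with Table I.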

Fig. 4. Modeled energy yield as a function of albedo for variable tilt angles and two transposition models.

Fig. 5. Modeled energy yield as a function of azimuth for systems in Colorado, Florida, Washington DC, and the southwestern U.S.

Figure 5 shows that the sensitivity to azimuthal alignment depends on location. Expected errors are <1% if the azimuthal alignment is confirmed within 10° in most locations. The azimuthal alignment must be more tightly controlled if a project is trying to meet time-of-use goals. The largest sensitivity was seen for the system in Colorado, located at a site with a microclimate driven by nearby mountains that spawn clouds in the afternoon, favoring an azimuthal orientation toward the east. These, and other factors listed in Table 1, may inadvertently change the test result depending on the choice of test boundary (Fig. 1), underscoring the importance of deliberately defining the test boundary to align with the intended system boundary. If the test boundary does not align with the intended system boundary, in some cases separate tests may be used. For example, the azimuthal alignment of a POA sensor may be verified using a clear-sky model, and the albedo may be measured directly.
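One way to implement the clear-sky azimuth check is sketched below: modeled clear-sky POA is computed for a range of candidate azimuths, and the best match to measured clear-day POA data is taken as the sensor orientation. The sketch assumes pvlib; the candidate range, tilt, and inputs are illustrative.

# Clear-sky check of POA-sensor azimuth (pvlib-based sketch; inputs are illustrative).
import numpy as np
import pvlib

def estimate_sensor_azimuth(poa_meas, times, lat, lon, tilt,
                            candidates=range(150, 211, 5)):
    loc = pvlib.location.Location(lat, lon)
    cs_ghi = loc.get_clearsky(times, model='haurwitz')['ghi']
    solpos = pvlib.solarposition.get_solarposition(times, lat, lon)
    dec = pvlib.irradiance.erbs(cs_ghi, solpos['apparent_zenith'], times)
    dni_extra = pvlib.irradiance.get_extra_radiation(times)
    best_az, best_err = None, np.inf
    for az in candidates:                         # trial sensor azimuths, degrees
        poa_model = pvlib.irradiance.get_total_irradiance(
            tilt, az, solpos['apparent_zenith'], solpos['azimuth'],
            dni=dec['dni'], ghi=cs_ghi, dhi=dec['dhi'],
            dni_extra=dni_extra, model='haydavies')['poa_global']
        err = float(((poa_meas - poa_model) ** 2).sum())
        if err < best_err:
            best_az, best_err = az, err
    return best_az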

C. Inaccurate model

Use of an inaccurate model for a performance guarantee places risk on both parties because the results of the test may depend on the weather during the test. For example, in northern climates, snow can introduce >10% error if it is not included in a model and the inclusion or exclusion of periods with snow can change the result of the test. Because this can affect the result, it is important that both parties agree whether such data should be removed. Error in today’s models may be dominated by 1) soiling losses, 2) loss from snow coverage (or gain from snow reflections), and 3) transposition of global to POA irradiance, as may be understood by a seasonal comparison of expected and measured energy. Every effort should be made to improve model accuracy. Approaches for managing these uncertainties should be carefully discussed when the test is designed.

Nevertheless, many seasonal issues could be assessed in other ways. For example, possible seasonal shading losses could be assessed through a site visit. A recent study to identify the best way to reduce the time of a performance test concluded that the largest contribution to seasonal variations in the test results was related to the transposition model [16]. Separating the transposition model from the POA-based model could facilitate shorter energy tests.

VIII. FINAL DISCUSSION AND CONCLUSIONS

The challenges of separating variable weather from system performance and of aligning historical with measured weather data allow inadvertent introduction of bias in an energy test. The energy-production models used for the initial prediction and the performance guarantee must be consistent, but separation of the transposition model from the PV system performance model may allow a reduction in the uncertainty of the test for system performance. For a short test, uncertainty may be reduced based on a study of a single-axis tracked system showing that the transposition error varied seasonally by ~6% [16]. For an annual test, the uncertainty may be reduced by ~2% based on the difference between the annual energy calculated using the Hay and Perez models (Fig. 5). In every case, the test boundary must be aligned with the intended system boundary so that all relevant aspects of system performance are included in the test.

ACKNOWLEDGEMENTS

The authors wish to thank M. Waters for valuable discussions. This work was completed under Contract No. DE-AC36-08GO28308 with the U.S. Department of Energy.

REFERENCES

[1] E. Riley, "Photovoltaic system model validation and performance testing: What project developers and technology providers must do to support energy production estimates," Proc. 38th IEEE PVSC, 2012, pp. 003053-003055.
[2] A. Mermoud, "Use and validation of PVSYST, a user-friendly software for PV-system design," Proc. 13th European PV Solar Energy Conference, Nice, France, pp. 736-739.
[3] D.L. King, J.A. Kratochvil, and W.E. Boyson, "PV array performance model," United States DOE, 2004.
[4] N. Blair, M. Mehos, C. Christensen, and C. Cameron, "Modeling PV and concentrating solar power trough performance, cost, and financing with the solar advisor model," Solar 2008, American Solar Energy Society, 2008.
[5] B. Marion and M. Anderberg, "PVWATTS – An online performance calculator for grid-connected PV systems," Proc. ASES Solar 2000 Conference, Madison, WI.
[6] B. Marion, "Comparison of predictive models for photovoltaic module performance," Proc. 33rd IEEE PVSC, 2008.
[7] B. Kroposki, et al., "PV module energy rating methodology development," Proc. 25th IEEE PVSC, p. 1311, 1996.
[8] S. Kurtz, et al., "Analysis of PV System Energy Performance Evaluation Method," NREL, 2013, http://www.nrel.gov/docs/fy14osti/60628.pdf.
[9] R. Perez, R. Seals, and J. Michalsky, "All-weather model for sky luminance distribution – preliminary configuration and validation," Solar Energy, vol. 50, no. 3, pp. 235-245, 1993.
[10] "PV system energy performance evaluation method," under development, IEC Central Office: Geneva, Switzerland.
[11] S. Ransome and P. Funtan, "Why hourly averaged measurement data is insufficient to model PV system performance accurately," Proc. 20th European Photovoltaic Solar Energy Conference, Barcelona.
[12] S.J. Ransome, "Modelling inaccuracies of PV energy yield simulations," Proc. 33rd IEEE PVSC, 2008, doi: 10.1109/PVSC.2008.4922864.
[13] C.W. Hansen, J.S. Stein, and D. Riley, "Effect of Time Scale on Analysis of PV System Performance," Sandia report, February 2012.
[14] C.P. Cameron, W.E. Boyson, and D.M. Riley, "Comparison of PV system performance-model predictions with measured PV system performance," Proc. 33rd IEEE PVSC, 2008, doi: 10.1109/PVSC.2008.4922865.
[15] J. Freeman, et al., "System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report," National Renewable Energy Laboratory, Golden, CO, 2013.
[16] M. Waters, et al., "The Ability of Short Term Performance Tests to Reproduce the Results of a One-Year Adjusted Energy Test for Non-Concentrating PV Systems," Proc. 40th IEEE PVSC, 2014.
