More than $1 billion is spent in Australia every year on distributed photovoltaic systems, from small household systems to 100MW-plus power stations. In every case, the systems are real power stations that form part of the electricity infrastructure of this nation. If we spent $1 billion every year on a new coal-fired power station we would demand rigour and controls to ensure we were getting what we paid for and that it would function as it was specified to function. Why should PV solar be any different?
Solar panels are often regarded as a commodity and a technology that is 100% reliable. They can be.
Most solar panels are made using silicon solar cells. Silicon is almost over-qualified for the job of making electricity; a bit like using a race horse to collect mail at the end of a driveway. Over-qualified or not, silicon solar cells are thoroughly reliable and capable of doing the job for decades.
There are many exceptions, but in general Australia has only an emerging culture of checking panel quality. Various reasons are cited for not testing, such as: “The manufacturer has guaranteed the panel performance,” or, “No-one else has had any problems with poor panel performance in Australia,” and, “No-one else does any testing.”
Each of those assertions is false. If it is not checked, what is the value of a manufacturer’s guarantee? Problems are rarely advertised but they do exist. On a global scale, Australia has one of the lowest rates of panel testing. In many other countries, testing is mandatory before a solar plant can be financed.
Measure of value
Solar panels are sold with a nominal power output, typically 250-320 watts-peak (Wp), and are priced in dollars per watt-peak ($/Wp). Every solar panel is tested at the end of every production line, so a manufacturer knows the real power output of every panel. One of the first questions we can therefore ask is: do you get what you pay for in watts-peak?
Until recently it has not been possible to test the power output of solar panels in Australia (outdoor measurements are not accurate enough), and inflating the nominal output is an easy way for an unscrupulous manufacturer to make some free money.
A simple look at some statistics shows that we do have a quality problem in Australia. The graph below shows two different sorts of measurements. The output of each panel has been compared to the nominal output, so a score of 0% means the panel produced exactly what it promised. One set of measurements is from many different manufacturers of both suspected high- and low-quality products; in each case the manufacturer did not know that these panels would be tested.
The results are variable, and the worst panel is 12% under its promised power. Most of these panels are new. A 12% shortfall may not sound like a lot; perhaps the system was 30% cheaper, so you may imagine the customer has still won. Remember, however, that the manufacturer knows the real output of the panel. If a manufacturer is happy to over-rate a panel's output by even 5%, what other shortcuts has it taken in the manufacturing process?
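The comparison described above comes down to simple arithmetic. The sketch below shows the calculation, using hypothetical flash-test figures (the panel values are illustrative, not from the measurements in the graph):

```python
# Illustrative sketch only: comparing measured panel output to the
# nameplate (nominal) rating, as in the graph described above.
# All panel figures here are hypothetical.

def deviation_pct(measured_w: float, nominal_w: float) -> float:
    """Percentage deviation from nameplate: 0% means exactly as promised,
    negative means the panel under-delivers."""
    return (measured_w - nominal_w) / nominal_w * 100.0

# Hypothetical flash-test results for a batch of nominally 300 Wp panels
nominal = 300.0
measured = [296.4, 288.0, 264.0, 301.5, 291.0]

deviations = [deviation_pct(m, nominal) for m in measured]
print(f"Worst panel: {min(deviations):.1f}% relative to promised power")
```

A panel flashing at 264 W against a 300 Wp label is the 12%-under case mentioned above; a panel at exactly 300 W scores 0%.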
The other set of measurements is from a single, higher-end manufacturer who did know that a statistical sample of panels would be tested from the larger batch. There is much less variance and all panels produced slightly more power than their label rating. This trend (a difference in power output when a manufacturer is expecting testing) is replicated in testing laboratories around the world. It is like a teenager being on best behaviour when he knows you are watching him – but willing to see what he can get away with as soon as you look away.
A finished solar panel will be sold. The manufacturer considers only two questions: "How much can I get for this panel?" and, "Where can I sell it?" Unfortunately, because of Australia's reputation for doing little testing in the past, many poorer panels are destined for the Australian market.
There are multiple opportunities for a manufacturer to cut corners or for imperfections inherent in any production process to creep into the market. Starting from the top of the panel:
- The frame may not be square (increasing installation costs).
- The panel may be labelled “PID-free”, but many panels that carry this label are not PID-free. (PID refers to “potential induced degradation” and is a problem that can have catastrophic implications for power output, particularly for large solar farms where many panels are connected in a single string and the voltage [potential] differences are high).
- The encapsulation foil may not contain UV-blockers. For a manufacturer, this increases power output on day one and reduces cost, since the UV-blockers are the most expensive component of a solar panel. Given Australia's high UV levels, however, it is a poor trade-off for any panel mounted outdoors here.
- The backsheet may deteriorate on exposure to sunlight.
- Most significantly for performance on day one, the cells may contain micro-cracks (cracks that are not visible with the naked eye) and/or produce below their nominal power rating.
Both CSIRO’s PV Performance Laboratory (at its Energy Centre in Newcastle) and PV Lab Australia (in Canberra) offer commercial testing programs for solar panels.
PV Lab Australia's quality assurance procedures have been developed over more than a decade to provide information about the performance of modules and their long-term reliability. In doing so, they enable clients to safeguard investments in their installations and their reputation. PV Lab uses only Class AAA sun simulators in accordance with International Electrotechnical Commission (IEC) standards, and internationally recognised sampling techniques, to ensure thorough and reliable testing.
Testing generally looks to answer three questions. First, did I get what I paid for on day one? This is determined with a sun simulator, which measures power output under standard test conditions (STC).
Secondly, were the panels damaged before they left the factory or in transport? This is determined using an electro-luminescence (EL) test (or very soon a photo-luminescence test). In many cases micro-cracks, not visible to the naked eye but likely to have a future impact on power output, can form during handling or, for example, if a container is dropped at port. The picture below shows a severely damaged panel. Again, most of these cracks were not visible to the naked eye.
Thirdly, are the panels likely to last? This is determined using a suite of tests including PID test for longer-term degradation and wet leakage tests for susceptibility of a panel to water ingress.
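The three questions above amount to a simple acceptance check per panel. The sketch below is one way such a check might be structured; the thresholds, field names and pass/fail logic are illustrative assumptions, not PV Lab's actual procedure or any industry standard:

```python
# Hypothetical acceptance check combining the three test questions:
# STC power, electro-luminescence (micro-cracks) and PID degradation.
# Thresholds are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class PanelResult:
    measured_wp: float          # STC power from the sun simulator
    nominal_wp: float           # label (nameplate) rating
    micro_cracks: bool          # cracks visible in the EL image
    pid_power_loss_pct: float   # power loss after the PID stress test

def accept(panel: PanelResult,
           power_tolerance_pct: float = 3.0,
           pid_limit_pct: float = 5.0) -> bool:
    """Pass only if the panel meets its label rating (within tolerance),
    shows no micro-cracks in EL, and stays under the PID loss limit."""
    shortfall = (panel.nominal_wp - panel.measured_wp) / panel.nominal_wp * 100
    return (shortfall <= power_tolerance_pct
            and not panel.micro_cracks
            and panel.pid_power_loss_pct <= pid_limit_pct)

good = PanelResult(298.0, 300.0, micro_cracks=False, pid_power_loss_pct=1.2)
bad = PanelResult(280.0, 300.0, micro_cracks=True, pid_power_loss_pct=8.5)
print(accept(good), accept(bad))  # True False
```

The point of a check like this is that each failure mode is caught by a different test: a panel can flash at full power on day one and still fail on EL or PID.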
Lift the game
Quality assurance processes are happening in Australia, and some large farms are built to the standard, with the checks and balances, that we would expect of any large power plant. In many cases, however, there is no real quality assurance. By the time problems are identified they are inevitably time-consuming and expensive to fix. Good quality assurance is not a cost; it is a method of reducing cost through the proper allocation of risk.
When purchasers have a conversation with manufacturers covering how many panels will be tested, which tests will be done and what will happen if a problem is encountered, they effectively agree that panel quality is the manufacturer's risk, and the manufacturer manages that risk instead of avoiding it. Done in this way, the cost of a good quality-assurance program is low.
It’s time for Australia to join the rest of the world and recognise that a PV system, no matter the size, is a real power station that should last for decades. We should be demanding that our silicon race horses can collect the mail well into the next generation.
Dr. Michelle McCann is a partner at PV Lab in Canberra. She has worked in photovoltaics since 1998 and has twice held a world record for high efficiency solar cells.