Five years after its launch, a proposed overhaul of the Health Star Rating system could see one in five products have their ratings changed. But does it really go far enough?
When the Health Star Ratings were launched back in 2014, their intentions were not only noble, but necessary. Packaged foods and processed items were (and still are) shrouded in a litany of fanciful buzzwords like ‘fat-free’, ‘sugar-free’, and ‘high in protein’. Consumer choice was becoming more of a consumer guessing game, and the HSR system was meant to cut through all that white noise with an objective, quantifiable scale that would be easier to read than most nutritional labels.
Simply put, the HSR is a voluntary front-of-pack labelling (FoPL) scheme that rates the overall nutritional profile of packaged food. It calculates this by assigning each product “baseline” (or negative) points for its saturated fat, total sugars, sodium and energy content. The product then receives “modifying” (or positive) points for the protein, fibre, fruit and vegetables it contains. The net points are then converted to a star rating, ranging from as low as 0.5 (Arnott’s Tim Tams) to the maximum of 5 (Yoplait’s natural Greek yoghurt).
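The baseline-minus-modifying mechanics can be sketched roughly in code. To be clear, the point thresholds and star bands below are made-up placeholders for illustration only; the real scheme uses detailed, category-specific lookup tables published by the Australian Department of Health.

```python
# Illustrative sketch of the HSR points logic. All divisors and bands
# here are hypothetical placeholders, NOT the official HSR tables.

def baseline_points(energy_kj, sat_fat_g, sugars_g, sodium_mg):
    """'Negative' points per 100 g: more energy/fat/sugar/sodium -> more points."""
    return (int(energy_kj // 335)      # hypothetical: 1 point per 335 kJ
            + int(sat_fat_g // 1)      # hypothetical: 1 point per g saturated fat
            + int(sugars_g // 4.5)     # hypothetical: 1 point per 4.5 g sugars
            + int(sodium_mg // 90))    # hypothetical: 1 point per 90 mg sodium

def modifying_points(protein_g, fibre_g, fvnl_percent):
    """'Positive' points for protein, fibre, and fruit/veg content."""
    return (int(protein_g // 1.6)
            + int(fibre_g // 0.9)
            + int(fvnl_percent // 20))

def star_rating(net_score):
    """Map the net score onto 0.5-5 stars (illustrative bands only)."""
    bands = [(-6, 5.0), (-2, 4.5), (0, 4.0), (2, 3.5), (4, 3.0),
             (6, 2.5), (8, 2.0), (10, 1.5), (12, 1.0)]
    for threshold, stars in bands:
        if net_score <= threshold:
            return stars
    return 0.5

def hsr(energy_kj, sat_fat_g, sugars_g, sodium_mg,
        protein_g, fibre_g, fvnl_percent):
    score = (baseline_points(energy_kj, sat_fat_g, sugars_g, sodium_mg)
             - modifying_points(protein_g, fibre_g, fvnl_percent))
    return star_rating(score)
```

The subtraction in `hsr` is the crux of the criticism later in this piece: positives directly cancel out negatives, so a sugary product can buy back stars with added protein or fibre.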
While in theory the system works, in practice it’s produced some dubious results. So in an effort to address some of these issues, Australian firm MP Consulting was commissioned in 2017 to undertake the HSR system’s planned five-year review. In October, it released a consultation paper outlining ways in which the system could be improved, such as putting less emphasis on protein (most people already meet or exceed their recommended protein intake) and more emphasis on sugar (New Zealanders apparently consume six times the daily recommended amount). It also re-emphasised replacing the ‘as prepared’ rule with the ‘as sold’ rule – closing a loophole that allowed Milo to spend years boasting a 4.5-star rating.
A government advisory committee then took these recommendations and created a revamped HSR calculator (which is basically just a very confusingly laid-out Excel spreadsheet) allowing you to see what ratings a product would have under the new system. Kellogg’s Nutri-Grain, for example, would see its 4-star rating drop to a more sensible 2.5, while Nestle’s Milo cereal would also drop from 4 stars to 3.
Lunchbox favourites like Mother Earth Vege Fruit Sticks – which were called out in the 2017 Bad Taste Food Awards for their high sugar content – would see their 3-star rating drop to 2.5. Meanwhile, some products are set for an increase, such as Sanitarium’s Peanut Butter Spread, which would go from 4.5 stars to the maximum 5.
What’s more notable, however, is what isn’t changing. Paddle Pops, for instance, which were duly ridiculed on social media last year for their 3-star rating, would keep it under the proposed system. Up&Go would also keep its incredibly high rating of 4.5, as would the similarly rated Primo flavoured milk.
It’s important to note that the HSR is a system that’s designed for products in the same category to be compared to one another, which goes some way in explaining some of these ‘questionable’ ratings. The system isn’t designed for a dairy-based beverage to be compared to cereal, or a yoghurt to something like a muesli bar.
But most people don’t know this – they presume all HSRs are one and the same, probably because there’s nothing on the label to indicate otherwise. That could easily be fixed with a simple design change, yet the consultation paper barely touches the issue. Nor does it address the system’s bias towards processed foods over less processed ones, or the fact that because the scheme is voluntary, comparing products in the same category is easier said than done.
But these are all just details. If anything, what the lack of change for most products really highlights is that the system is inherently flawed. At its core, the HSR is a system that allows a food’s positives (fibre and protein) to offset a food’s negatives (sugar and sodium). It’s the same rationale that argues that the benefit of a salad will offset the detriment of a soda: the body doesn’t work that way, and neither should the country’s food labelling system.
A better system than the HSR would be something like the Traffic Light System that’s used in the UK. Not only is it more comprehensive in the sense that consumers can look at individual nutritional components of a product (fat, sugar etc.), but it’s still simple enough to read at its most basic level – the more ‘reds’ on the label the worse it is, the more ‘greens’ the better.
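For comparison, the Traffic Light classification is simple enough to sketch in a few lines. The thresholds below follow the UK FSA’s front-of-pack guidance for foods per 100 g as commonly cited; treat them as approximate rather than authoritative.

```python
# UK-style traffic-light classification for foods, per 100 g.
# Thresholds approximate the UK FSA front-of-pack guidance.

THRESHOLDS = {              # nutrient: (green ceiling, amber ceiling), g/100 g
    "fat":       (3.0, 17.5),
    "saturates": (1.5, 5.0),
    "sugars":    (5.0, 22.5),
    "salt":      (0.3, 1.5),
}

def traffic_light(nutrient, grams_per_100g):
    """Classify one nutrient as 'green', 'amber', or 'red'."""
    green_max, amber_max = THRESHOLDS[nutrient]
    if grams_per_100g <= green_max:
        return "green"
    if grams_per_100g <= amber_max:
        return "amber"
    return "red"

def label(values):
    """Build the full front-of-pack label: one colour per nutrient."""
    return {n: traffic_light(n, v) for n, v in values.items()}
```

Note how each nutrient is judged on its own: a sugary drink gets a red for sugar even if every other light is green, with no way for protein or fibre to buy the red back – exactly the property the HSR’s offsetting calculation lacks.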
Interestingly enough, Traffic Lights were preferred by health and consumer groups before the HSR was introduced, but were rejected by the food industry. So the HSR system was settled on as a compromise – the “least worst option” – for those on both sides of the issue.
Food labelling is a highly politicised process, and more often than not, what the food lobby wants, the food lobby gets. Sure, the HSR is great at summarising a product’s overall nutritional value, but it’s terrible at telling you why that product earned the rating it has. The “least worst option” is oversimplification at its finest, and it’s consumers who’ll suffer the most for it.