Methodology (you might want to skip this bit 🙂 )
Any study like this has its drawbacks, of course, but hopefully it’s a pretty good snapshot of how safely helmets from the main brands are likely to perform in an accident, relative to each other.
This table relies on SHARP crash helmet testing data only (covering 2010 to 2016), so it’s never going to be fully comprehensive. And we’ve not included every helmet brand in the list. There are a few reasons for this. Maybe a brand hasn’t been tested enough to give a reasonably reliable amount of data – or hasn’t been tested at all. Or maybe a brand has so little distribution that we’ve chosen to leave it out. We’ve tended to focus on the main brands – meaning brands that are more widely known and that helmet buyers will want to know about.
Sorry Sparx, Osbe, Halfords and the like.
Our main drawback is the limited number of helmets tested for some brands, which may slant the figures. SHARP choose and buy the helmets themselves, so which models get tested is out of our hands – and if a brand’s helmets haven’t been chosen for testing, they won’t appear in our table.
As alluded to above, to avoid small samples skewing the results, we’ve excluded brands that haven’t had a reasonable number of helmets tested. Why? Well, imagine one brand has 10 helmets tested with an average score of 3 stars; it could rank below a brand with just one helmet scoring 4 stars. So where there are only a handful of tested helmets available to score, we’ve usually removed the brand from the survey.
It’s worth pointing out that the SHARP test has some detractors too, who reckon it’s not real-world enough – which may or may not be true. However, we think it’s about as good as it gets – you can read what the test entails here and an analysis of SHARP data here and make your own mind up if you like.
Whatever your point of view, what the SHARP testing regime has going for it is that it’s held under controlled circumstances in a laboratory, so each helmet should be subject to an identical test – meaning it’s possible to compare the results across helmets. Yes, agreed, it might not fully simulate the accident where you hit diesel while hanging off your Z1000, bash your helmet on a curbstone at a 15 degree angle and then scrail it down the road for 100 yards, but it does subject the helmet to impacts from multiple sides and show which individual helmets, all things being equal, perform best. So we reckon it’s about as good as the available information gets, and that’s what we’re basing this analysis on.
The scoring is simple. Where a helmet was awarded five stars, we’ve given it 5 points; where it scored one star, we’ve given it 1 point. We then add up the total number of points and divide by the number of helmets tested to find the average (mean), and order the list with the highest-scoring brand first. In the event of a tie, we also look at helmet scores from the last couple of years – so where two brands have the same average, we use the scores from their most recent tests to choose a winner.
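For anyone curious how that ranking works mechanically, here’s a minimal sketch in Python. The brand names, star scores, years and the minimum-sample cut-off are all made up for illustration – they’re not real SHARP results – but the logic follows the steps above: average the star scores per brand, drop brands with too few tested helmets, and break ties using the most recent tests.

```python
from statistics import mean

# Hypothetical sample data: (brand, SHARP star rating, test year).
# Purely illustrative scores, not real SHARP results.
tests = [
    ("BrandA", 5, 2015), ("BrandA", 4, 2012), ("BrandA", 3, 2011),
    ("BrandB", 4, 2016), ("BrandB", 4, 2014),
    ("BrandC", 3, 2015), ("BrandC", 4, 2013), ("BrandC", 5, 2010),
    ("BrandD", 5, 2016),  # only one helmet tested - will be excluded
]

MIN_SAMPLE = 2  # assumed cut-off for a "reasonable" sample size

def rank(tests):
    # Group star scores (with their test year) by brand.
    by_brand = {}
    for brand, stars, year in tests:
        by_brand.setdefault(brand, []).append((stars, year))

    rows = []
    for brand, results in by_brand.items():
        if len(results) < MIN_SAMPLE:
            continue  # too few helmets tested to trust the average
        avg = mean(stars for stars, _ in results)
        # Tie-breaker: average of the two most recent tests.
        newest_first = sorted(results, key=lambda r: -r[1])
        recent_avg = mean(stars for stars, _ in newest_first[:2])
        rows.append((brand, avg, recent_avg))

    # Highest average first; equal averages broken by recent-test average.
    rows.sort(key=lambda row: (-row[1], -row[2]))
    return rows

for brand, avg, recent_avg in rank(tests):
    print(f"{brand}: {avg:.2f} (recent tests: {recent_avg:.2f})")
```

In this made-up data set all three surviving brands average 4.0 stars, so the recent-test tie-breaker decides the order, and the single-helmet brand never makes the table at all.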
Phew. Till next time!