As you probably know, we're big fans of the SHARP helmet testing scheme here at BCH. How else could you have the foggiest idea of how safe your helmet is without those nice people at SHARP to tell you?
So we were delighted to be invited along to one of the SHARP contracted testing labs to see a helmet or two being dropped, plunged and hurled in a variety of rigorously-controlled and scientifically-measured ways.
If you’re unsure what SHARP is (that’s the Safety Helmet Assessment and Rating Programme), then you might want to read this article.
You might also find our definitive guide to crash helmet safety useful too; it analyses SHARP data to find lots of interesting info about helmet weights and which are the safest crash helmets.
This particular contracted lab was in Salford, and when we rocked up they'd set up a few 'test' drops for us to check out because, as I found out, security and impartiality are taken exceedingly seriously when testing helmets. Meaning they couldn't/shouldn't/don't do live helmet testing with unnecessary interlopers, such as myself, in attendance.
And rightly so.
The other thing that quickly became apparent is that they’re mega serious about exacting standards and data integrity.
That was exemplified straight away when they demonstrated how they set up and mark each helmet for testing.
One of the key things with SHARP is that EVERY crash helmet is tested in EXACTLY the same way – so all results are directly comparable.
So they mark the points where every impact test will hit directly on the surface of the helmet, using lasers and inert markers. To do that, they insert a (strictly controlled) head form into the helmet, then take an absolute age making sure it's precisely placed onto that head form in the same way as every other helmet has been placed since SHARP began in 2007.
By the way, the black thing on the top of the helmet is a weight – the same weight as used in EU ECE testing, in fact – to make sure every helmet is seated on the head form in precisely the same way.
Once each of the seven test helmets has been accurately marked – and once the test rig has been calibrated and the calibrated accelerometer added to the mix – and after the headform with helmet has been precisely loaded onto the rig (using a mixture of highly trained lab expertise, patience and lasers – they love their lasers at SHARP), everything is ready to begin.
They use two testing rigs – one for the front, rear and side impact tests. And another for the crown impact test.
Which is awfully decent of them because that’s a lot of extra effort to test just the crown (top of the head) when the stats say only around 2% of impacts occur there.
ECE 22.05 and SHARP
SHARP builds on and supplements the basic statutory ECE testing.
That’s because the ECE regulation tests stuff like chin strap, field of vision and conditioning (heating up helmets, cooling them down, humidifying them etc. etc.) which means SHARP doesn’t have to. They can then take only ECE 22.05 approved helmets knowing they’ve all passed that ‘base level’ of testing – then build on it with their own testing.
While it's great that all helmets on sale in Europe (and now Australia) must pass this test, SHARP was built on the finding that there's still massive variation in performance between these ECE approved helmets.
And while ECE impact testing is done at hot and cold temperatures – to ensure helmets provide a minimum level of protection through the different seasons – SHARP testing is carried out only at ambient temperature. Though of course, that's precisely the same ambient temperature for each helmet tested.
They also incorporated one or two improvements to the ECE testing to further increase accuracy and repeatability of their tests.
For example, when helmets are dropped during an ECE test, they’re not held entirely firmly in the rig, so they can and do fly off in different directions after impact – and that reduces the amount of force applied directly to the helmet by a random amount each time.
SHARP, on the other hand, not only checks that exactly the same point on the surface of the helmet is impacted each time (yup, you guessed it, using more lasers!) but holds the helmet firm on the rig too, so it can't bounce around.
That means the same amount of testing force goes through the helmet, every time. And that leads to more consistently comparable results. Job done.
While ECE testing isn't based on the COST 327 findings (which SHARP is), both systems do share similar methodologies – SHARP incorporated the best bits of ECE testing. So things like the headform shape and weight, the helmet positioning and the anvils used are all common between SHARP and UN ECE.
The massive attention the guys at SHARP give to accurate and repeatable testing is what makes it so meaningful and useful.
Back onto the testing of the crown impact (top of the helmet).
While the dedicated SHARP team use a separate testing rig for it, apparently the ECE test takes a short cut.
What do you do when you want to test the crown but your testing rig doesn’t really let you? Why, you saw the chin guard off first of course!
Yeah, sounds a bit odd to us too, but that’s exactly what ECE 22.05 testing does.
It does seem a little bizarre for all manner of reasons – not least in compromising the integrity of the helmet as a whole. But there you go, I’m sure they have their reasons (surely the Eurozone can’t be THAT skint they can’t afford a different testing rig?!?)
Meanwhile, back at the lab, we went through a front impact test, then a rear and a side impact test. SHARP impacts both sides of the helmet whereas ECE only does one side – the one that appears weakest, such as where you find sun visor sliders and other such gubbins.
They also carried out an oblique test.
Around 60% of real-world helmet impacts are oblique, according to Europe's most comprehensive analysis of real-world motorcycle accidents – COST 327. That means rather than a straight-on impact (like headbutting a wall), most involve an angled impact.
So SHARP simulates that too. They take a helmet, stick a (bloody heavy!) headform in it, place the helmet on a testing rig, then drop the helmet onto a solid angled metal anvil that’s covered in an abrasive material (see a photo of the result at the bottom).
In this test, the helmet is free to move after the impact, and they use the same abrasive material for each and every oblique test, so the helmets dig in a bit and rotate in a similar way to what you'd find if you hit the surface of the road at speed.
This test makes a right old racket and because the helmet is free to fly off after the test, they’ve a padded ‘catch box’ that’s pushed up to the rig to catch the helmet.
SHARP testing and Flip-up helmets
I know quite a few people have questions about how they test flip-up helmets and the various forms of ECE homologations – I know I have – so now was a great time to ask the team and find some answers.
The first one is about the figure SHARP reports on its website showing how many times a modular helmet’s ‘faceguard remained fully locked’.
That wording – and what the figure actually means – has always needed a bit of clarification for me, and for many people on the various forums I check out.
When SHARP impact tests a modular helmet, the face guard is always down, closed and locked. They never test with the guard up and open.
After each impact test, they check to see if the guard is still locked. That's it. They don't look to see whether the guard has opened – they check whether any unlocking has gone on, that's all. It could be that one of the two locks has opened, or both. The chin guard could be fully open or just cracked open a little. All of these count as a lock failure and reduce that 100% perfect score.
So of the 30 impacts each helmet model undergoes, if the chin guard unlocks, say, 15 times, the little padlock graphic on the SHARP website would show a figure of 50% – whether the chin guard flew off entirely or just unlocked on one side while still looking closed.
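The arithmetic behind that padlock figure is simple enough to sketch in code. This is purely an illustration of the calculation as described above – the function name and data shape are our own, not SHARP's software:

```python
def locked_percentage(impact_results):
    """Return the percentage of impacts after which the chin guard
    remained fully locked.

    impact_results: list of booleans, one per impact test.
    True  = chin guard stayed fully locked after that impact.
    False = any unlocking at all (one lock or both, cracked open
            a little or fully open) - all count as a lock failure.
    """
    total = len(impact_results)
    locked = sum(1 for stayed_locked in impact_results if stayed_locked)
    return 100 * locked / total

# The example from the article: 15 lock failures out of 30 impacts.
results = [True] * 15 + [False] * 15
print(locked_percentage(results))  # 50.0
```

Note that a helmet whose guard merely unlocked on one side scores exactly the same as one whose guard flew off – the figure records lock failures, not how far the guard opened.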
Modulars – and ECE P/J ratings?
The second question was more about ECE 22.05 approval than SHARP.
We always report when a helmet has been ECE dual-homologated (or scored both P & J ratings – same thing). But there's also an NP rating, meaning the chin guard is non-protective.
I wanted to know how often SHARP see modulars that are NP rated.
The answer – just once, and that was a long time ago. Many flip-ups aren't J rated, meaning they can't legally be used with the chin guard up – but pretty well all on the market today are approved to be used with the chin guard down.
So, you’re OK wearing your modular like a full face helmet on the road, but if you want a flip-up helmet that’s good to go with the chin guard up, then check out our dual-homologated modulars and you’ll be fine.
SHARP test cheaters?
Another thing we chatted about is manufacturers building helmets just to pass a standard.
If they want to pass the SHARP test with flying colours, won't they just reinforce their helmets at the points they know are going to be tested – maybe leaving other areas performing less well?
Well, SHARP cover that too. They dismantle each helmet model and check that there’s nothing dodgy going on – no extra padding or reinforcement or other subterfuge being employed.
They also check that helmets continue to be built the same way throughout the manufacturing run by testing samples of the helmet sometimes years down the line, to ensure standards are being maintained.
Thankfully, they’ve pretty well always found helmets carry on achieving the same ratings, so helmet makers don’t seem to be trying to cheat the system.
SHARP are bikers too
Finally, it was great to hear that the main guy at SHARP – the one who’s spearheaded the initiative for the last 6 years – is a biker too. Not only does he ride a Ducati 750ss, but he’s an off-roader and general bike fiddler and bolt twiddler too.
And I just had to ask about the helmet he wears! While he wouldn't divulge the brand – understandably – he did confirm that it's a SHARP 5 star rated helmet. Good to hear (and mine is too, by the way – I turned up today in a Caberg Duke).
Here’s a couple of pictures of the dummy tests the guys ran while I was there. As you can see from the cracks and scuffs – these tests don’t mess about.
The first shows the helmet after the side impact test. A crack doesn't necessarily mean a fail at all. What it does mean is that the helmet shell has absorbed the impact load up to and past the point of failure. If that's stopped the impact being transmitted to the rider's skull and brain, then it's done its job (well). But only the accelerometer readings will show the truth of the matter.
The second picture shows the end results of the oblique friction test. Many of you will know just how close to a real world impact/scuff mark this looks – which is the objective. The helmet hits the surface, the friction between the two surfaces tries to spin the helmet, and the forces that go through the helmet to the rider are measured.
At the end of the day, I’m convinced.
Not only are the SHARP team a very nice bunch of people who ride bikes like the rest of us, but the SHARP process seems to be based on the best real-world accident data and the best available methodology, underpinned by a team that's committed to incredibly careful testing using the best testing rigs around. And it continues – in our humble evaluation – to be the best way for us to compare how well a range of crash helmets will protect us in a survivable accident.
Long may it continue (and thanks guys!)