Nowadays many devices are designed with efficient internal diagnostic
capabilities. For some smart transmitters the automatic diagnostics can detect
over 90% of all transmitter failures. Further, a good proof test procedure with
three point calibration may detect up to 97% of all transmitter failures.
But what happens when these two test results come together in the PFDavg (average Probability of Failure on Demand) calculation?
As the person responsible for SIL verification, can I simply use the maximum test coverage identified in the example above, i.e. 90% for the internal device diagnostics and 97% for the proof test coverage?
Discussion example 1:
As per the FMEDA report for a suitable SIL-capable transmitter, the Automatic Diagnostics (AD) can
detect 313 FIT (Failures In Time, where 1 FIT = 1 failure per 10⁹ hours) out
of a declared total of 347 FIT dangerous failures. The diagnostic coverage of
such a test can be calculated as 313/347, which yields a 90% DC factor (where DC = λDD /
(λDD + λDU)).
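As a quick sanity check, the 90% DC figure can be reproduced from the FIT values quoted above:

```python
# Diagnostic coverage from the example FMEDA figures:
# DC = lambda_DD / (lambda_DD + lambda_DU)
lambda_dd = 313            # FIT detected by the automatic diagnostics
total_dangerous = 347      # FIT, all declared dangerous failures

dc = lambda_dd / total_dangerous
print(f"DC = {dc:.1%}")    # -> DC = 90.2%
```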
If the automatic diagnostic feature is not enabled in the SIS logic solver (for example, because the logic
solver is not designed to detect an over- or under-range signal from the transmitter), the
proof test (PT) impact outlined in the FMEDA report indicates that some
338 FIT will be detected out of the total 347 FIT. In this case the proof test
coverage differs from the diagnostic test alone, i.e. 338/347 = 97% DC factor.
The numbers above show that the proof test activity itself can detect an additional 25 FIT of
failures. This is because the proof test can detect some failures of the
device which cannot normally be detected by the internal transmitter diagnostics.
For example, “signal drift” failures are not normally detected by the automatic
diagnostics of a single transmitter, but can be detected by the proof
test when the requirements for a “three-point” calibration are followed.
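Using the same FMEDA figures, the proof-test-alone coverage and the extra failures found only by the proof test work out as:

```python
pt_detected = 338          # FIT the three-point-calibration proof test can find
ad_detected = 313          # FIT found by the automatic diagnostics
total_dangerous = 347      # FIT, all declared dangerous failures

print(f"PT coverage alone: {pt_detected / total_dangerous:.0%}")          # -> 97%
print(f"Found only by the PT: {pt_detected - ad_detected} FIT")           # -> 25 FIT
```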
However, if the diagnostic feature is implemented in the SIS (i.e. the logic solver detects over- or
under-range signals), then in terms of the earlier FIT values the proof test contributes
338 FIT (by PT) – 313 FIT (by AD) = 25 FIT. This is because the proof test
is able to detect 338 FIT, but 313 FIT of these have already been
detected by the automatic diagnostics. Therefore in this scenario only
25 FIT of failures can be detected at the time of the actual proof test.
We cannot “detect” failures which have already been detected (and experience
suggests that the transmitter is very likely to have been repaired before the scheduled proof
test if the diagnostics have already flagged a failure). So we are
really detecting 25 FIT out of the 34 FIT (347 FIT total – 313 FIT detected by AD) which remain
undetected after the automatic diagnostic process.
So, with the diagnostics implemented, the real proof test coverage
would be 25 FIT (PT) / 34 FIT = 74% DC factor. This contrasts with
the earlier 97% DC claim (and the remaining λDU, regardless of whether
the diagnostics are on or off, is 9 FIT).
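The effective proof test coverage follows directly from the figures above:

```python
total_dangerous = 347      # FIT, all declared dangerous failures
ad_detected = 313          # FIT found by the automatic diagnostics
pt_detected = 338          # FIT the proof test can find in total

remaining_after_ad = total_dangerous - ad_detected   # 34 FIT
pt_only = pt_detected - ad_detected                  # 25 FIT
effective_cpt = pt_only / remaining_after_ad
print(f"Effective CPT = {effective_cpt:.0%}")        # -> 74%
```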
The table below summarises the above description of the failure rates and proof test coverage for the different types of test:

| Test scenario | Dangerous failures detected | Coverage |
|---|---|---|
| Automatic diagnostics (AD) only | 313 of 347 FIT | 90% |
| Proof test alone (AD not implemented) | 338 of 347 FIT | 97% |
| Proof test after AD (AD implemented) | 25 of 34 FIT | 74% |
| Undetected by either test | 9 FIT | n/a |
So why is all of this important when calculating the PFDavg? The
answer is that the proof test coverage for the device without the diagnostics
is often used in the calculation for the device where the diagnostics are actually enabled.
This raises the question: what proof test coverage (CPT) should be
used in the PFDavg formula?
Discussion example 2:
Case 1:
If we assume 100% proof test coverage for the device from the table above, then 34 FIT will be detected during such a proof test, i.e. the simplified PFDavg equation will be:
PFDavg = 0.5 * CPT * λDU * T1
where:
CPT : Proof test coverage = 100%
T1 : Proof test interval
λDU : from the FMEDA equals 34 FIT.
It means that 34 FIT of failures are detected by the
proof test, so the term ‘CPT*λDU’ equals 34 FIT.
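Plugging in an illustrative one-year proof test interval (an assumption for the sketch, not a value from the text), Case 1 works out as:

```python
lambda_du = 34e-9    # 34 FIT expressed as failures per hour (1 FIT = 1e-9 /h)
cpt = 1.0            # Case 1: assumed perfect proof test coverage
t1_hours = 8760      # assumed 1-year proof test interval (illustrative)

pfd_avg = 0.5 * cpt * lambda_du * t1_hours
print(f"PFDavg = {pfd_avg:.2e}")   # -> 1.49e-04
```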
Case 2:
The FMEDA report indicates that a proof test is not perfect (in this example, 9 FIT will never be detected over the lifetime of the SIS). The proof test is able to detect an additional 25 FIT of the 34 FIT failures remaining after the automatic diagnostic test. So the simplified equation should be changed to reflect this:
PFDavg = 0.5 * CPT * λDU * T1 + 0.5 * (1 - CPT) * λDU * LT
where:
λDU : from the FMEDA = 34 FIT
CPT : proof test coverage
T1 : proof test interval
LT : transmitter life time
We know that the 34 FIT for λDU from Case 1 must be
distributed between the two parts of the PFDavg equation: the term ‘CPT*λDU’ must equal 25 FIT and is
related to T1, while the remaining 9 FIT relate to LT.
If we wrongly assumed a proof test coverage of 97%, the term ‘CPT*λDU’
would equal 0.97 * 34 = 33 FIT, which would be incorrect because, as
indicated above, only 25 FIT are detected by the proof test alone. We must instead use
a CPT of 74%, which is the fraction of failures detected by the proof
test out of those remaining undetected after the automatic diagnostic test. With a CPT
of 74%, the term ‘CPT*λDU’ equals 0.74 * 34 = 25
FIT, which is the proper value. The lower coverage also increases the second,
lifetime-related half of the equation, and in some cases this can significantly affect the final
PFDavg calculation.
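To see how much the wrong CPT assumption matters, the two variants of the Case 2 equation can be compared. T1 and LT below are illustrative assumptions, not values from the text:

```python
lambda_du = 34e-9    # 34 FIT remaining after the diagnostics, as failures per hour
t1 = 8760            # assumed proof test interval: 1 year, in hours
lt = 15 * 8760       # assumed SIS lifetime: 15 years, in hours

def pfd_avg(cpt):
    # Simplified two-term equation: tested part over T1, untested part over LT
    return 0.5 * cpt * lambda_du * t1 + 0.5 * (1 - cpt) * lambda_du * lt

wrong = pfd_avg(0.97)      # coverage claimed as if the diagnostics were off
correct = pfd_avg(25/34)   # effective coverage after the diagnostics (74%)
print(f"CPT = 97%: PFDavg = {wrong:.2e}")    # optimistic result
print(f"CPT = 74%: PFDavg = {correct:.2e}")  # roughly 3x higher
```

With these assumed intervals the optimistic 97% figure understates the PFDavg by roughly a factor of three, which is why the distinction matters.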
In conclusion, we should be careful when assessing the proof
test coverage of devices with automatic diagnostics.
We should keep in mind that failures which are already
detected by the automatic diagnostics cannot be counted again as detected during
a proof test.
If we neglect the above conclusions, our PFDavg/PFH calculations might
be overly optimistic, i.e. the real risk reduction delivered by the SIF might
be lower than calculated.
The effect of a wrongly assessed proof test coverage might be even
greater for logic solver devices, where the automatic diagnostic coverage is usually
high and the real proof test coverage might be much lower than assumed. It
is also likely that the proof test will not be able to detect any extra random
hardware failures compared with those already detected by the automatic
diagnostics.
A similar approach should be used when assessing proof test
coverage for valves used in safety instrumented functions (SIFs) with partial
valve stroke testing capabilities – this will be a topic for the next SLCC blog.
The takeaway question:
Do you consider the above criterion for proof test coverage of the SIF devices when calculating the PFDavg for the SIF?
Need help with any of the terminology? Try our Safety Terms Jargon Buster.