Issued 21st February 2012
Recent years have seen DNA profiling progress from relatively crude methods that could obtain results only from visible body fluids, such as blood and saliva. Because only limited areas of DNA were profiled, it was necessary to produce statistics estimating the probability that the profile could have come from anyone other than the suspect; such statistics were in the millions. Improvements in sensitivity and specificity mean that we can now profile invisible stains and produce far larger statistics. With these improvements, a ‘ceiling’ statistic of a billion (thousand million) was introduced to ensure confidence in the reliability of the calculations involved. A straightforward single-person profile is therefore normally reported as having a chance of one in more than a billion of coming from another, unrelated, person.
Mixtures of DNA from different people can be notoriously difficult to assess; until recently, some were regarded by DNA analysts as uninterpretable. Recent trials in which we have been involved have seen the introduction of complex, and not widely accepted, computer programmes used to produce statistics for profiles for which the DNA expert was unable to provide statistics by conventional methods. Debate continues within the scientific community even about the best way to interpret mixtures with clear, unambiguous alleles. Some mixtures also exhibit the features normally associated with Low Template DNA (LTDNA), that is, variable results. The interpretation of such profiles remains controversial, with various approaches being proposed.
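A small enumeration illustrates why mixtures are harder than single-source profiles. The sketch below is a deliberate simplification, not any programme discussed here: it ignores peak heights, dropout, and the number of contributors being unknown, and simply counts the two-person genotype combinations that could explain the alleles seen at one locus.

```python
from itertools import combinations_with_replacement

def contributor_pairs(observed):
    """Enumerate unordered pairs of two single-person genotypes whose
    combined alleles exactly account for the observed set at one locus."""
    observed = frozenset(observed)
    # All possible genotypes built from the observed alleles (hom. or het.).
    genotypes = list(combinations_with_replacement(sorted(observed), 2))
    pairs = set()
    for g1 in genotypes:
        for g2 in genotypes:
            if set(g1) | set(g2) == observed:
                pairs.add(frozenset((g1, g2)))
    return pairs

# Four clear alleles: the only explanations are the 3 ways of splitting
# them into two heterozygotes.
print(len(contributor_pairs("ABCD")))  # → 3
# Only two alleles: allele sharing and homozygosity allow more combinations.
print(len(contributor_pairs("AB")))    # → 4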
A trial in Oxford in 2010 heard testimony from statisticians from both the UK and the USA, presenting similar, yet different, computer models for the statistical evaluation of complex DNA mixtures. The judge ruled that the statistics from the UK-based statistician be admitted in evidence, but ruled the statistics from the American programme ‘not yet ready’ for admission. The accused was convicted.
In a subsequent trial in Northern Ireland (concluded in January 2012), the same American expert gave testimony based on his statistical programme. Its admissibility was challenged, but the judge ruled that the system was at a stage at which it could be regarded as reliable, and admitted it in evidence. The statistics generated by this programme ran into the trillions, thereby apparently providing even stronger evidence from a complex mixture than is provided by a straightforward single-source profile. This is illogical. One accused was convicted, while the other was found not guilty.
More recently, in February 2012, statistics provided by the same UK-based statistician as in the Oxford case above were presented in evidence at a murder trial in Liverpool. Professor Allan Jamieson gave evidence in which he accepted the conclusion of the Crown’s DNA expert that the profiles were not capable of conventional statistical analysis, but challenged the reliability of the output of the statistical programme. The defendant was found not guilty.
Of course, it is impossible to know the effect that this evidence had on the various decisions. But, aside from the novelty of such programmes and the current debate regarding how to interpret ‘normal’ mixtures, the acceptance of these statistical models by the prosecution appears contrary to the increasing recognition that scientific techniques used in courts (and it is arguable whether statistics is a science per se) should be tested for their reliability before being used.
The International Society of Forensic Genetics stated:
“The implementation of such an approach in routine casework, in particular when using a computer-based expert system for mixture interpretation, requires an extensive validation of the variable parameters such as Hb and Mx, as well as appropriate guidelines for all laboratory procedures.”
Such validation has not been performed for at least one of the programmes now being used in UK courts. The United States National Academy of Sciences, in its 2009 report on the state of forensic science, was clear:
“The simple reality is that the interpretation of forensic evidence is not always based on scientific studies to determine its validity. This is a serious problem. Although research has been done in some disciplines, there is a notable dearth of peer-reviewed, published studies establishing the scientific bases and validity of many forensic methods. …
However, some courts appear to be loath to insist on such research as a condition of admitting forensic science evidence in criminal cases, perhaps because to do so would likely “demand more by way of validation than the disciplines can presently offer.” …
The bottom line is simple: In a number of forensic science disciplines, forensic science professionals have yet to establish either the validity of their approach or the accuracy of their conclusions, and the courts have been utterly ineffective in addressing this problem.”
We continue to challenge this interpretation of DNA data, but it is time for a comprehensive, independent, and authoritative review of the reliability and limitations of software programmes being used to provide statistical estimates from DNA profiles in court.