A recent scandal at the UK Post Office has highlighted the need for legal reform as organizations increasingly adopt artificial intelligence to augment decision-making processes.

Over 20 years ago, Fujitsu, a Japanese technology conglomerate, developed Horizon, an accounting software system for the UK Post Office. The system was rolled out to branches across the country, including small, independently run shops that collect and deliver letters and parcels for tens of thousands of people. Horizon was the largest civilian IT rollout in Europe at the time. It was also plagued with errors, as is common for any new IT system. But these were not typical bugs. In some cases, the system displayed incorrect figures to staff for the amount of money taken at the end of the day’s trading, leading to the “loss” of thousands of pounds every night.

Many Post Office workers were accused of theft or false accounting and given an ultimatum: pay the difference or face prosecution. From 2000 to 2014, more than 700 people were prosecuted, and on average 30 were imprisoned each year. Homes, livelihoods and relationships were ruined. Some of those affected took their own lives.

One of the most troubling aspects of this scandal is that Fujitsu has recently admitted to knowing about the system’s bugs when it was delivered to the Post Office in 1999. But one issue that has received little attention is that the laws of England and Wales presume that computer systems do not make errors, which makes it difficult to challenge computer output. Governments worldwide that have such laws need to review them, because they have implications for new IT systems, especially those using artificial intelligence (AI). Organizations are increasingly combining AI with conventional IT systems to augment their decision-making. That this is happening under legal systems that presume computer evidence to be reliable should be unthinkable. Until such laws are reviewed, innocent people will continue to risk being denied justice when AI-enhanced IT systems are found to be in error.

The central source of potential injustice in a law that presumes computers operate correctly is that the burden of proof falls on the defendant to demonstrate improper use or operation of the computer system. Doing so could require, for example, a record of the software’s relevant code or keystroke logs. But accessing such information is hard. In most Horizon cases, defendants had no idea which documents or records would show that a relevant error had occurred, and so could not request that the Post Office disclose them when they were taken to court. This imbalance of knowledge left individuals with little hope of defending themselves against the charges.

Some lawyers and researchers involved in defending those prosecuted in the Post Office cases are suggesting a different approach. Paul Marshall, a barrister at Cornerstone Barristers in London, and his colleagues argue in an article (P. Marshall et al. Digit. Evid. Electron. Signat. Law Rev. 18, 18–26; 2021) that the presumption that computer evidence is reliable should be replaced with a requirement that relevant data and code be disclosed in legal cases. Where necessary, such disclosure should include the information-security standards and protocols followed; reports of system audits; evidence that error reports and system changes were reliably managed; and records of the steps taken to ensure that evidence has not been tampered with.

One group of claimants who challenged the Post Office over being wrongfully accused did manage to have relevant Horizon documents produced. With help from IT specialists, the group won its case in 2019. In individual cases in which defendants sought specialist help, the Post Office settled out of court and required defendants to sign non-disclosure agreements, meaning that the computer evidence remained hidden.

The workings of IT systems must be open to scrutiny in legal cases. And there are ways to establish such transparency without disclosing trade secrets, a concern for some organizations and businesses. Sandra Wachter, a researcher in AI at the University of Oxford, UK, says that tools exist that can explain how automated systems reach their decisions without revealing everything about how an algorithm works.

In the early days of personal computing, in the 1980s, the law did not presume that a computer’s operations were correct: proof to that effect was required before computer-generated information could be admitted as evidence in court. The law was changed in 1999 in recognition of the fact that computers had become more reliable. But the pendulum has now swung too far in the other direction.

As AI technologies become more mainstream, legal cases involving these systems will also increase. Computer evidence cannot be assumed to be reliable, and relevant laws must be reviewed and updated to ensure justice for all.
