One of the things that scares me about the US criminal justice system is that it relies almost entirely on magical thinking and torture to secure convictions. I don't believe anyone is guilty unless they were literally photographed holding the bloody knife while standing over the body, and even then I'll need some convincing to rule out potential mitigating circumstances.
A few years back the Danish Data Protection Agency found out that there were several major errors in the cell phone data the police had been using in criminal cases to establish the location of phones, and by extension their owners. The system would sometimes swap the locations of the caller and the phone receiving the call.
The agency also criticised the police for having done nothing to check the quality of the data before using it in court. The courts, prosecutors, and defence lawyers also deserve blame, as they never challenged the quality of the data.
All the non-computer people just assumed that if the computer said something, it must be true. Why wouldn't it be? Computers are, after all, magic and incorruptible truth-machines.
On top of that, cell phone data is often thought to be more precise than it really is. It is commonly assumed that a phone connects to the nearest cell phone mast, but a phone actually connects to the mast with the strongest signal, and especially in built-up areas that is not necessarily the closest one. This data can be off by as much as 30 km.
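You can see why "nearest mast" is a bad assumption with a toy simulation. This is just a minimal sketch, assuming a log-distance path-loss model with random shadowing from buildings and terrain; the mast distances, transmit power, and path-loss numbers are all made up for illustration, not real network parameters:

```python
import math
import random

# Toy model: a phone connects to the mast with the strongest received
# signal, not necessarily the nearest one. With distance-based path loss
# plus random shadowing (buildings, terrain), a farther mast can win.
# All parameters here are illustrative assumptions, not real network data.

random.seed(1)

def received_power_dbm(distance_m, shadowing_sigma_db=8.0):
    """Received power under a log-distance path-loss model:
    40 dBm transmit power, path-loss exponent 3.5, plus Gaussian shadowing."""
    path_loss_db = 40.0 + 35.0 * math.log10(max(distance_m, 1.0))
    shadowing_db = random.gauss(0.0, shadowing_sigma_db)
    return 40.0 - path_loss_db + shadowing_db

# Hypothetical masts at various distances from the phone (metres).
masts = {"A": 400.0, "B": 900.0, "C": 1500.0}

mismatches = 0
trials = 10_000
for _ in range(trials):
    powers = {name: received_power_dbm(d) for name, d in masts.items()}
    strongest = max(powers, key=powers.get)  # mast the phone actually uses
    nearest = min(masts, key=masts.get)      # mast an analyst might assume
    if strongest != nearest:
        mismatches += 1

print(f"strongest mast != nearest mast in {mismatches / trials:.0%} of trials")
```

Even with these made-up numbers, the phone ends up on a mast other than the nearest one in a meaningful fraction of trials, which is exactly why "the phone was on mast X, so the owner was near X" is shakier evidence than it sounds.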
Yeah, you can trust a computer if you know what you're doing, and not otherwise. Societally, though? Fucking hell no.
Poland had some AI-driven nonsense for its benefits claims for a while. Except of course it was always, in the end, a human who said "yeah, do whatever the machine says". Basically none of them ever overrode it, and it got so bad that they just tossed the whole thing.
>As presented in last year’s report, in May 2014, the Ministry of Labor and Social Policy introduced a simple ADM system that profiles unemployed people and assigns them three categories that determine the type of assistance they can obtain from local labor office. Panoptykon Foundation, and other NGOs critical of the system, have been arguing that the questionnaire used to evaluate the situation of unemployed people, and the system that makes decisions based on it, is discriminatory, lacks transparency, and infringes data protection rights. Once the system makes a decision based on the data, the labor office employee can change the profile selection before approving the decision and ending the process. Yet according to official data, employees modify the system’s selection in less than 1% of cases. This shows that, even if ADM systems are only being used to offer suggestions to humans, they greatly influence the final decision.