Proponents
Supporters argue that the use of data analytics can eliminate or curtail judges' inherent biases. Studies have shown, for example, that judges often assign higher bail and longer sentences to Black defendants, and are more likely to sentence them to capital punishment. Today's data assessment tools only recommend decisions to judges. Future defendants, however, may advocate replacing judges with Artificial Intelligence entirely, thereby eliminating racism and human bias from judicial decisions.
Proponents also emphasize socioeconomic inequalities in the current justice system. New Jersey, for example, passed a law replacing its cash bail system with a risk assessment tool that instead gives a recommendation to a judge. Prior to the law's passage, one former defendant testified before the New Jersey Senate's Law and Public Safety Committee: "I sat in jail for four months solely because of my inability to pay $3,000 guaranteeing my release." Advocates argue that using risk assessment tools to abolish bail levels the playing field for all people, no matter their socioeconomic status.
Detractors
Some argue that data analytics could actually deprive inmates and defendants of their rights. Eric Loomis, a former defendant from Wisconsin, sued the state, alleging that he had been deprived of his constitutional right to see the evidence against him. Many risk assessment algorithms are created by private companies that do not want to reveal how their formulas work. A Wisconsin court ruled that Loomis had the right to see the score the formula gave him, but not to know how it reached that score, and the United States Supreme Court declined to hear his appeal. Such lack of transparency has data skeptics crying foul.
Critics claim that without unfettered access to the formulas behind data tools, these tools could actually perpetuate bias against defendants and inmates. A study of Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), one of the nation's leading risk assessment tools, indicates that detractors have a point: these assessments can be racist. Black defendants who did not go on to commit new crimes were far more likely to receive a high risk assessment score than white defendants in the same situation.
Some detractors argue that unfair biases could worsen as more complex technology is introduced. Artificial Intelligence could use race and socioeconomic status as direct factors in measuring defendants' risk. Moreover, the Artificial Intelligence under discussion is, by nature, self-correcting, so such tools would be not only independent but dynamic. Critics claim these characteristics may make it challenging, if not impossible, to analyze the relevant technology and discover illegal or unethical biases. The United States Constitution guarantees defendants "due process" and "equal protection of the laws." A future system that relied upon secret, unchallengeable formulas could violate both mandates, leaving defendants powerless.