Individual harm

A final limitation of these kinds of tools relates to the way they link with decisions made, either by law enforcement agencies or in the criminal justice system. Being based on correlation, the analysis can suggest high-risk individuals or locations, but is generally unable to explain why. There are important questions of fairness, particularly in approaching criminal justice decisions around questions of the similarity between a particular individual and others who have behaved badly in the past. This is particularly so when the analytic tools themselves are not transparent (an issue discussed below), or where it is difficult for individuals to challenge inferences drawn against them (O'Neill, 2016).

CHALLENGES FOR RESEARCHERS

Being aware of the limitations of particular tools is an important first step for researchers considering employing or critiquing data-analytic tools in criminological research. This section discusses two additional matters that will need to be borne in mind: non-transparency of some of the underlying approaches and difficulties in designing and implementing a proper evaluation.

 

Non-transparency

The most significant barrier for researchers interested in understanding data-driven approaches to predictive policing and offender risk assessment is lack of access to both the algorithms/models employed and the underlying data on which predictions are based. Most commercial software tools keep their methods commercial-in-confidence. While the model underlying PredPol was published in Mohler et al. (2011), the extent to which the original model remains in use is unknown. Even less is known about the models, approaches and algorithms deployed in other commercial software tools.

An additional transparency challenge is obtaining access to data, including data employed in training machine learning algorithms or statistical analysis, and crime data over time (as a proxy for the change in crime based on use of a particular approach to police deployment).

 

Evaluation

There are two kinds of evaluation that can be conducted on predictive policing and offender risk assessment tools. The first is to test the accuracy of the prediction itself. In the case of offender risk management, this can be done by comparing risk scores with whether the offender is known to have engaged in relevant conduct (offending, breaking bail conditions, violence) over a fixed period of time. For predictive policing, this can be done by comparing the locations of predicted crime to actual events (noting the precision in location and timing). The challenge here is that an evaluation is ideally conducted if one is using the tool (recording the predictions) but not actually intervening – in other words, not using risk scores to make decisions or not focusing police patrols on forecasted hot spot locations. For this kind of evaluation, which assesses predictive accuracy, interventions would introduce bias into the data.
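The first kind of evaluation described above can be illustrated with a minimal sketch. The hit-rate calculation below is not any vendor's actual method; the grid cell identifiers and crime records are hypothetical, and a real evaluation would use recorded crime data with careful attention to spatial and temporal precision:

```python
# Illustrative sketch of predictive-accuracy evaluation for hot spot forecasts.
# All cell IDs and crime records below are hypothetical.

def hit_rate(forecast_cells, crime_locations):
    """Share of recorded crimes that fell inside the forecasted grid cells."""
    if not crime_locations:
        return 0.0
    hits = sum(1 for cell in crime_locations if cell in forecast_cells)
    return hits / len(crime_locations)

# Grid cells forecast as high risk for one patrol shift (hypothetical)
forecast = {"A3", "B7", "C1"}
# Cells where crimes were actually recorded during that shift (hypothetical)
actual = ["A3", "D4", "B7", "B7", "E2"]

print(hit_rate(forecast, actual))  # 3 of the 5 crimes fell in forecast cells -> 0.6
```

The sketch also shows why intervention biases this evaluation: if patrols are sent to the forecast cells, the crimes recorded there (or deterred from occurring there) no longer reflect what the forecast was trying to predict.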

This is easy to see in the context of offender risk management tools. There can be no data on non-compliance with bail conditions of those held in custody pending trial. There can be no data on re-offending rates during periods where people remain in custody. Even though there may be data on whether these people re-offend after they are released, such data does not necessarily reflect the counterfactual of what would have occurred had they been released sooner. For example, a longer period in custody provides greater opportunities for criminal associations to develop. This creates difficulty in evaluating the accuracy of offender risk assessment tools.

The second type of evaluation is to measure the effectiveness of the tool in achieving its purpose (such as reduction in crime, cost savings or reducing recidivism). This requires the evaluation to be done on a programme as implemented. For example, predictive policing software can be operationalized in a sample of locations to test whether it performs better at reducing crime than traditional approaches adopted elsewhere.
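The logic of this second kind of evaluation can be sketched as a simple comparison of outcomes between treatment and control locations. All figures below are invented for illustration; a real evaluation would require random assignment of locations and formal significance testing:

```python
# Illustrative sketch of an effectiveness evaluation: compare the change in
# recorded crime in locations using the tool (treatment) against locations
# using traditional patrol allocation (control). All figures are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

# Percentage change in recorded crime per location over the trial period
treatment_change = [-7.2, -4.5, -6.1, -3.8]   # tool deployed
control_change = [-1.0, -2.3, 0.4, -1.9]      # traditional deployment

# A naive effect estimate: difference in mean change between the two groups
effect = mean(treatment_change) - mean(control_change)
print(f"Estimated effect: {effect:.2f} percentage points")
```

The control group is what makes the comparison meaningful: without locations where the tool was withheld, any observed fall in crime could equally be attributed to city-wide trends rather than the programme itself.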

A significant challenge for those seeking to evaluate predictive policing programmes as operationalized is ensuring high levels of implementation. Police resistance can be a barrier to predictive policing (Perry et al., 2013: 129). Indeed, one attempt to evaluate a predictive policing programme noted the challenge of implementation across multiple districts over a period of time (Hunt et al., 2014: xiii).

There are also ethical issues with both forms of evaluation, similar in many ways to the ethical issues inherent in testing new medical treatments. While the test is under way, the person conducting the test has a 'treatment' that is not being deployed to (in this case) prevent crime. In the case of the first type of evaluation, no intervention at all is permitted (despite the knowledge gained from the tool). In the case of the second type of evaluation, operationalization is limited to a sample so that there is a control group or location against which the treated group or location can be compared. Despite these concerns, evaluations are nevertheless necessary, for the same reasons that clinical trials are necessary in medicine: there needs to be proof that a treatment is safe and effective before it is widely adopted.

 

 
