Volume 1, Issue 2 • Spring 2012

Table of Contents

Foreword

General Strain Predictors of Arrest History Among Homeless Youths from Four United States Cities

Students’ Perceptions of School Learning Climate in a Rural Juvenile Detention Educational Facility

Transitions of Truants: Community Truancy Board as a Turning Point in the Lives of Adolescents

Family Warmth and Delinquency among Mexican American and White Youth: Detailing the Causal Variables

Polygraph Testing for Juveniles in Treatment for Sexual Behavior Problems: An Exploratory Study

The 10-Question Tool: A Novel Screening Instrument for Runaway Youth

Applying a Developmental Lens to Juvenile Reentry and Reintegration

Commentary: Assessing Client Outcomes in Youth Justice Services: Current Status and Future Directions

Commentary: Assessing Client Outcomes in Youth Justice Services: Current Status and Future Directions

Andrew Day and Sharon Casey
School of Psychology, Deakin University, Geelong, Australia

Andrew Day, School of Psychology, Deakin University, Waterfront Campus, Geelong, Australia; and Sharon Casey, School of Psychology, Deakin University, Waterfront Campus, Geelong, Australia.

Correspondence concerning this article should be addressed to: Andrew Day, School of Psychology, Deakin University, Waterfront Campus, Geelong 3217, Australia; E-mail: andrew.day@deakin.edu.au

Acknowledgments
The authors would like to thank Youth Services and Youth Justice, Department of Human Services, Victoria, Australia, for their contribution to the ideas presented in this paper and, in particular, to Kathryn Anderson, Rebecca Fitzsimons, and the project reference group.

Keywords: Self-report measures, client outcomes, assessment, youth justice, juvenile justice, young offender

Abstract

Youth justice services are increasingly expected to demonstrate that the services and programs they provide lead to measurable outcomes. This paper considers how client outcomes other than recidivism, which are considered important to youth justice service providers, might be conceptualized and reliably assessed. We conclude that there is a need to develop methods of assessment that are consistent with the principles of evidence-based assessment and we make a number of suggestions for the development of practice in this area.

Introduction

Youth justice services1 have been characterized by some as adhering to one of two distinctive models of practice: the “Justice model,” which is concerned with accountability, punishment, and due process; and the “Welfare model,” which is based on administering justice in reference to the best interests of the young person (see Day, Howells, & Rickwood, 2004; Noetic Solutions, 2010; Stephenson, Giller, & Brown, 2007). Recent years, however, may have seen a trend toward convergence, with elements of the “welfare” model gaining popularity in North America, and increasing pressure for European youth justice systems to use elements of the “justice” model (Richards, 2011).


1The term “youth justice” is used in this paper to refer to services offered to children and young people between 10 and 18 years of age. Other terminology, including juvenile justice and young offender, is commonly used in other jurisdictions.

Despite differences in emphasis, youth justice services typically aim to achieve multiple outcomes for their clients. In addition to justice outcomes, such as improving community safety by reducing rates of recidivism and ensuring compliance with justice orders, youth justice services seek to provide programs and services that address a broad range of social and emotional needs and facilitate the positive development of children and young people (see Hawkins, Letcher, Sanson, Smart, & Toumbourou, 2009). In addition to providing targeted interventions that manage risk in those who are considered to be serious and/or persistent offenders, most juvenile justice services thus also aim to provide interventions that promote social integration.

This paper considers the way in which the success of services in achieving these multiple outcomes might be assessed. The importance of demonstrating that services do deliver their intended outcomes is illustrated by a recent United Kingdom government policy paper entitled “Breaking the Cycle” (Ministry of Justice, 2010), which proposes the introduction of a system of “payment by results” across the criminal justice system. The policy paper proposes that providers be paid according to the success they achieve in reducing offending, and that this be funded by subsequent savings to the criminal justice system. This represents a major shift in thinking (in the United Kingdom, at least) from a system that previously focused on process to one that will rely on outcomes. The proposal draws, in part, on models emerging from the employment sector, in which providers are paid based on their success in getting the long-term unemployed into sustainable employment. However, any “payment by results” or “pay for performance” system is inevitably based on the extent to which outcomes can be measured in a meaningful and reliable way. The United Kingdom model proposes that recidivism be used as the exclusive outcome measure, yet concerns have long been expressed about the appropriateness and validity of recidivism statistics (e.g., Lloyd, Mair, & Hough, 1994), especially for juveniles (for a discussion of these limitations, see Stephenson, Giller, & Brown, 2007, or Tresidder, Payne, & Homel, 2009). There are therefore grounds to argue that a broader range of short- and long-term outcomes is also relevant in youth justice and should be considered in any evaluation of services.

Any attempt to measure outcomes inevitably involves the collection of data that can be used to determine whether the goals and objectives of a service have been achieved. Outcome measurement relies on the collection of both output indicators (e.g., whether an intervention was provided as planned) and outcome indicators (e.g., whether the objective was achieved). Both outputs and outcomes should be directly linked to inputs, or the activities that the case worker completes with a client in the course of case management. These typically relate to specific forms of intervention, rather than the social or administrative context in which interventions take place. This is not to say that such contexts are irrelevant or unimportant, but rather that the aim of outcome measurement is to establish the association between client change and the case work activities that are undertaken. In line with most contemporary approaches to evaluation, there is a need to articulate the “logic” that underpins the model of service delivery (e.g., Scriven, 1998).
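To make the distinction between inputs, outputs, and outcomes concrete, the brief Python sketch below shows one way a single service contact might record all three side by side. The field names and values are hypothetical illustrations rather than part of any existing youth justice information system.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ServiceContact:
    """Hypothetical record linking case work inputs to outputs and outcome indicators."""
    client_id: str
    inputs: List[str] = field(default_factory=list)            # case work activities undertaken with the client
    outputs: List[str] = field(default_factory=list)           # whether the intervention was delivered as planned
    outcomes: Dict[str, float] = field(default_factory=dict)   # measured change against service objectives

contact = ServiceContact(
    client_id="C-001",
    inputs=["family mediation sessions", "referral to education provider"],
    outputs=["6 of 6 planned sessions delivered", "referral accepted"],
    outcomes={"family_functioning_change": 1.5, "school_engagement_change": 0.5},
)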

Whereas data on a range of output indicators (e.g., the number of service contacts or number of referrals made) are often available, outcome data (at the service level at least) are much less easily accessed. A search of the youth justice research literature reveals that although a number of individual program evaluations have been reported (see Lipsey, 2010), almost nothing has been published regarding service level outcomes in youth justice services. A recent review of United Kingdom youth justice services (National Audit Office, 2010) concluded that the lack of robust information about activities likely to be most effective in preventing offending makes it difficult to assess the success of the service in achieving its goal of reducing recidivism. This is a complex task, and one for which current approaches to client assessment in youth justice are probably poorly suited. It is suggested that there is a need to either complete further validation of current assessment tools or develop alternative approaches to client assessment if outcomes are to be adequately measured.

The Purpose of Client Assessment

The measurement of client outcomes is, of course, only one of a number of possible functions that assessment serves. Three of these are considered next, although clearly any assessment process should be multipurpose, integrated, and coordinated.

First, an important function of assessment in any justice system is client classification. The ability to predict those individuals who, once having entered the criminal justice system, are likely to continue their offending behavior is an important goal for most services that work with offenders, especially given evidence that those who are assessed at higher risk are most amenable to intervention (Lipsey, 2010). Including structured, standardized, formal measures of risk in any assessment has several advantages. Such measures help to ensure that the wide range of factors associated with future offending are properly covered by the assessment process and that any decisions made about client management are open to rational explanation. In addition, these measures provide a more consistent approach to assessment by eliminating the possible biases of individual professionals.

A number of different assessment instruments are available to classify young offenders. These include the Young Offender Level of Service Inventory (Shields, 1993), the Youth Level of Service Inventory (Andrews, Robinson, & Hoge, 1984), the Youth Level of Service/Case Management Inventory (Hoge & Andrews, 2002), the Psychopathy Checklist: Youth Version (Forth, Kosson, & Hare, 2003), and the Young Offender Assessment Profile (Youth Justice Board, 2006). These instruments have not been particularly well validated (see Welsh, Schmidt, McKinnon, Chattha, & Meyers, 2008) and are generally unlikely to meet what are considered to be the psychometric standards required for evidence-based assessment (see Hunsley & Mash, 2007; 2008). For example, the average Area Under the Curve (AUC) for juvenile risk assessment tools has been reported as 0.64 (Schwalbe, 2007), below what is generally considered an acceptable level of predictive validity (Dolan & Doyle, 2000). A recent systematic review and meta-analysis by Singh, Grann, and Fazel (2011), however, did find that the Structured Assessment of Violence Risk in Youth (Borum, Bartel, & Forth, 2003) had a high rate of predictive validity.
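For readers unfamiliar with the AUC statistic cited above, the short Python sketch below illustrates how it can be computed and why it is read as an index of predictive validity: it estimates the probability that a randomly chosen reoffender received a higher risk score than a randomly chosen non-reoffender (0.5 is chance level, 1.0 is perfect discrimination). The scores and outcomes are invented for illustration and are not drawn from any of the instruments discussed.

# Hypothetical risk scores from a screening tool and observed reoffending (1 = reoffended, 0 = did not).
scores =     [0.8, 0.6, 0.7, 0.4, 0.9, 0.3, 0.5, 0.2, 0.6, 0.4]
reoffended = [1,   1,   0,   0,   1,   0,   1,   0,   0,   1]

def auc(scores, labels):
    """Probability that a reoffender scores higher than a non-reoffender (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(round(auc(scores, reoffended), 2))  # 0.76 for these made-up data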

The problem of instrument validation may relate to some of the difficulties in predicting adolescent (rather than adult) behavior, in terms of the high base rates of re-offending (Leschied & Cunningham, 2000), the heterogeneity of young offenders (e.g., life course persistent offenders and adolescent risk takers; see Ayers et al., 1999), the role that life events and protective factors play in behavior, and the impact of developmental factors on rates of re-offending (as exemplified in the age-crime curve). For example, Van der Put et al. (2011) have recently shown that recidivism tends to be lowest in early adolescence, to peak in mid-adolescence, and then to diminish in late adolescence (recidivism risk is highest in those aged 14 years). None of the existing classification tools has been calibrated to accommodate these age-related changes.

A second important function of assessment is to accurately identify offender needs at the point of entry to the system. An assessment should screen each client for immediate physical and mental health risks before considering those longer-term areas of need that might then inform the development of a case plan. According to VanBenschoten (2008), although determining an offender’s general risk level is critical to classification, the identification of specific dynamic risks (or criminogenic needs) is the primary basis of case planning. VanBenschoten also argues, however, that available risk/needs tools are often limited in their capacity to inform case plans and can be impracticable or unwieldy for practitioners to use in their daily work. He points to widespread difficulties in implementation and compliance: “If officers don’t see how the risk/needs tool can help them better manage a case, then it relegates the tool to a data gathering instrument for administrators and researchers” (p. 38). This criticism would appear to apply to many of the assessment tools that are currently used in youth justice services (see above) and, anecdotally at least, many services have noted the poor quality of recorded data, poor levels of compliance with administration, especially at the end of an order, and a lack of connection between the assessment process and the case plan (Day & Casey, 2011). In our view, many of these problems with compliance and implementation arise because client needs are not assessed in a manner that makes it possible to complete a re-assessment at the end of a service contact (or order) so that feedback about change is available to the practitioner.

Andrews and Bonta (2010) define criminogenic needs as dynamic risk factors, and it is these that serve as the intermediate targets of change in any attempt to reduce the risk of further offending. Together with criminal history, largely a static construct, and the criminogenic domains of procriminal attitudes, associates, and antisocial personality, these represent what are referred to as the “big four” risk factors. The remaining criminogenic needs complete the “central eight” risk factors of criminal conduct.2 Similar lists have been proposed by others. For example, Douglas and Skeem (2005) have identified a number of dynamic risk factors for violence that include impulsiveness, negative affect, psychosis, antisocial attitudes, current substance use, interpersonal relationship problems, and poor treatment compliance. Of these, the majority are covered by Andrews and Bonta’s “central eight.” Although these factors have been shown to apply to both juvenile and adult offender populations, it is likely that the relative weighting of particular factors will vary according to age (e.g., peer group relationships and criminal associates, given that juvenile offending commonly occurs in a social context). Here, the study by Van der Put et al. (2011) is significant in that it suggests that different types of risk factors exert most influence at different ages, with static risk factors becoming increasingly influential as age increases. Although younger adolescents are rated as experiencing fewer dynamic risk factors (criminogenic needs), it is the presence of these factors that appears to carry the most predictive power in this younger age group.


2The other risk factors are those of social achievement (education, employment), family/marital status (marital instability, poor parenting skills, and criminality), substance abuse, and leisure/recreation activities (or the lack of prosocial pursuits); see Andrews & Bonta, 2010, p.46.

A second set of client needs centers on the task of preparing for adulthood and living independently in the community. Adolescence is widely recognized as a period that involves significant cognitive, psychological, and social transitions (see Burrow, Tubman, & Finley, 2004) in which the adolescent is required to make adjustments in the face of changes in the self, in the family, and in the peer group (Lerner & Galambos, 1998). Young people are also required to deal with institutional changes, such as the transition to high school during early adolescence and to work or university in later years. These changing relations constitute the basic process of adolescent development and, depending on the adolescent’s developmental history, experience of adversity, and access to social resources, are thought to underlie the positive and negative outcomes that are associated with this period (Lerner, 1993). It thus becomes important to assess service outcomes not only in relation to re-offending rates or changes in criminogenic need, but also in terms of a range of other variables, the most important of which are considered next.

Additional Targets for Change

In addition to addressing criminogenic need, justice agencies also work to improve several key areas in an effort to facilitate the young person’s pathway into adulthood and, hopefully, ameliorate risk factors associated with a transition from adolescent to adult offender. Developmental criminology theorists (e.g., Catalano & Hawkins, 1996; Farrington, 2005; Sampson & Laub, 1997, 2005; Thornberry, 1997) have consistently identified the important role of socializing factors (family, peers, school/work, community) in the onset and maintenance of serious and persistent antisocial behavior during adolescence. The three factors that are perhaps most often the target of interventions by juvenile justice workers are family functioning, involvement with antisocial peers, and engagement with education (see Stephenson et al., 2007).

Parents and primary carers are possibly the most influential force in a child’s psychosocial development. Although the influence of peers grows and that of parents appears to wane during adolescence, parents retain the ability to influence the values and behaviors of their adolescent children (Allen et al., 2002; Allen, Moore, & Kuperminc, 1997; Collins & Laursen, 2004). Among the parental and familial factors identified in the literature as influencing delinquency, substance use, and risky sexual behavior are family environment, parenting styles, parental criminality, and the nature of attachment between the parent and child (Barnes, Welte, & Hoffman, 2002; da Silva, Sanson, Smart, & Toumbourou, 2004; Dobkin, Tremblay, & Sacchitelle, 1997; Moffitt, 1993; Mullis, Cornille, Mullis, & Huber, 2004; Rutter, 1997; Turner, Irwin, Tschann, & Millstein, 1993). For example, research into the influence of family composition (the number of parents and siblings living at home) on risk behaviors suggests that adolescents living with two parents are significantly less likely than those living with only one parent to engage in delinquent behavior and substance use, less likely to initiate sexual intercourse at a younger age, and at reduced risk for depression (Barnes et al., 2002; Halfours et al., 2004; Mullis et al., 2004; Turner et al., 1993).

Larger family sizes have also been found to be associated with increased risk for delinquency (Farrington, 1995). Dysfunctional intrafamilial communications, such as conflict, hostility, and emotional distance, have been shown to be significantly related to antisocial behavior and substance use (Bergen, Martin, Richardson, Allison, & Roeger, 2004; Tolan, Guerra, & Kendall, 1995), higher levels of affiliations with antisocial or substance-using peers (Fergusson & Horwood, 1999), and increased likelihood of engaging in delinquent behavior (Chung, Hawkins, Gilchrist, Hill, & Nagin, 2002; Mullis et al., 2004).

Researchers have identified parenting styles, including disciplinary practices, monitoring of children’s activity, family management practices, communication styles, and availability to their children, as having the potential either to protect against, or to increase the risk of, engagement in risk behaviors (Dobkin et al., 1997; Farrington, 1995; Fergusson & Woodward, 2000). Family management practices, too, may either protect against or promote risk among children. These practices include monitoring, setting rules and limits, and using discipline (Kosterman, Haggerty, Spoth, & Redmond, 2004), parental monitoring and supervision (that is, the parent knowing where the child is, whom he or she is with, and what he or she is doing; Biglan et al., 1990), and communication style (that is, parents’ ability to communicate with their children openly and to deal positively with issues relating to risk behaviors; Kosterman, Hawkins, Guo, Catalano, & Abbott, 2000).

The importance of peers increases as age increases. A significant goal in Western cultures during the adolescent period is the shift away from parental control to the development of one’s own beliefs, values, and sense of identity or self-concept (Allen et al., 1997). Failure to successfully negotiate such change, for whatever reason, has consistently been implicated in the persistence of antisocial behavior into adulthood (Dishion, Nelson, & Bullock, 2004). Older adolescents spend more time with their peers, form more intimate and significant relationships with them, and receive increased emotional support from them. As a result, the influence of peers on behavior also increases during adolescence. An example of this is provided by Carroll and colleagues (Carroll, Durkin, Hattie, & Houghton, 1997; Carroll, Hattie, Durkin, & Houghton, 2001), who found that at-risk and delinquent teens placed primary importance on maintaining an image of rebellious law-breakers, which they pursued as a means of attaining status among their peers. Further, one of the most robust findings with regard to risk behavior is that adolescents whose close friends and/or peers engage in risk behavior are also more likely to engage in that behavior. This has been demonstrated for delinquency (e.g., Ayers et al., 1999; Farrington, 1995; Fergusson & Woodward, 2000; Mullis et al., 2004), substance use (e.g., Fergusson & Horwood, 1997; Kosterman et al., 2000; Parry, Morojele, Saban, & Flisher, 2004), and risky sexual behavior (e.g., Biglan et al., 1990; Garwick, Nerdahl, Banken, Muenzenberger-Bretl, & Sieving, 2004; Jaccard, Blanton, & Dodge, 2005; Jeltova, Fish, & Revenson, 2005).

Finally, an extensive body of criminological literature (e.g., Farrington, 1992; Maguin & Loeber, 1996; Monk-Turner, 1989) has shown that young people who are not committed to school, who demonstrate low academic achievement, who have poor school attendance (Katsiyannis & Archwamety, 1999; Thornberry, Moore, & Christenson, 1985), who exhibit negative attitudes toward school (Loeber, Stouthamer-Loeber, Van Kammen, & Farrington, 1991; Farrington & Hawkins, 1991), who demonstrate school disciplinary problems (Flannery, Vazsonyi, & Rowe, 1996), and who are truant or drop out of school (Farnworth & Leiber, 1989) are more likely to engage in delinquent and/or antisocial behaviors than those who do not exhibit these characteristics. This relationship, which is consistent across genders, also shows that young people with deficient academic skills not only offend more frequently, but also commit more violent and serious offenses and persist in delinquent behaviors longer than young people whose academic performance is age appropriate (Maguin & Loeber, 1996). Moreover, academic deficiencies in late childhood and early adolescence are frequently a precursor to limited life opportunities in later adolescence and adulthood. It follows that the provision of education services to juvenile offenders could have long-lasting positive effects on broader social contexts, including future employment, involvement in community activities, family and peer relationships, and decreased criminal activity (see Stephenson et al., 2007).

Outcomes Measurement

As noted above, outcome measurement can be used to determine whether the goals and objectives of a service have been achieved. There are two principal methods by which outcomes can be assessed. The first is to ask clients to rate themselves on a series of domains that they consider important. For example, a client may be asked to rate how she or he experiences family relationships at the outset of a service or intervention, and then again after the service has been delivered. This method is most appropriate for measuring changes that only the client can report on, although it may be possible to think of observable indicators or outputs (e.g., the number of visits to the family) that would allow staff members to rate change. Researchers generally consider Likert-type scales to be the most user-friendly format for self-report measurement (Barnette, 2000; Brannon, 1981) and most suitable for use with people who have poor verbal skills (Davies, Lewis, Byatt, Purvis, & Cole, 2004). Some researchers have suggested that only positively worded items should be used, since younger children have been shown to have difficulties in interpreting negatively keyed items (Marsh, 1986).
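As a minimal illustration of this first method, the Python sketch below scores a small set of hypothetical, positively worded family-relationship items rated on a 5-point Likert scale at intake and again at the end of an order, with the domain score taken as the item mean. The items and scoring are invented for illustration and do not represent a validated instrument.

# Hypothetical 5-point Likert items (1 = strongly disagree ... 5 = strongly agree), rated by the client.
intake = {"I get on well with my family": 2, "My family listens to me": 1, "I feel supported at home": 2}
at_exit = {"I get on well with my family": 4, "My family listens to me": 3, "I feel supported at home": 3}

def domain_score(ratings):
    """Domain score as the mean of the item ratings."""
    return sum(ratings.values()) / len(ratings)

change = domain_score(at_exit) - domain_score(intake)
print(f"Family relationships: {domain_score(intake):.2f} -> {domain_score(at_exit):.2f} (change {change:+.2f})")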

While researchers view self-report as the most appropriate method for assessing constructs that are perceptual in nature (e.g., values, attitudes, affective responses), there are a number of threats to the validity of this method. These include the tendency to respond in socially desirable ways, especially when there are issues of secondary gain involved (such as securing early release from custody). Although some researchers have suggested that factors such as these undermine the validity of these assessments, making them inferior to those that are professionally rated (Kroner & Loza, 2001), there is sufficient empirical evidence from studies conducted with adult offenders to justify the use of self-report instruments, even in areas such as risk assessment where issues of secondary gain are perhaps most prominent. For example, Motiuk, Motiuk, and Bonta (1992) compared a self-report measure of risk classification (the Self-Report Inventory; SRI) and the Level of Service Inventory (LSI) and noted moderate to strong associations between subcomponents of the two measures (ranging from 0.41 for leisure/recreation to 0.80 for criminal history). Motiuk et al. (1992) also showed that the SRI was a stronger predictor of misconduct than the LSI and was as strong in predicting reincarceration. More recently, Loza, Loza-Fanous, and Heseltine (2007) considered the issue of deceptive responding using the Self-Appraisal Questionnaire (SAQ), a validated self-report measure designed to predict violent and non-violent recidivism. They found no significant difference on SAQ scores between offenders who completed the instrument for research purposes and those who completed it as a precursor to decisions about release from prison. In fact, the offenders were much more consistent in their response to the SAQ than to items from a measure of socially desirable responding (i.e., deception was low).

A second method is to ask staff members to rate change. Assessors may, for example, have a view on how well the client has participated in an education or employment program and whether this has changed over time. In many ways this is the simplest way to assess outcomes, even though such ratings are based on clinical judgment and there are concerns about the validity (e.g., clinical assessment of risk) and reliability (e.g., do different staff members have different ideas about what constitutes change?) of such ratings. Staff ratings of change may also be influenced by the desire to be seen as an effective practitioner. In such circumstances, it can help to have a set of guidelines that structures how ratings are made. Stewart and Thompson (2004) have summarized some of the literature on human decision-making relating to practitioners’ prediction of risk, which has applicability to the current discussion about how to assess change. These researchers have identified four biases, the first of which is the tendency to underuse base rates when predicting events that are uncommon (which leads to a tendency to overestimate the occurrence of an event). Second, confirmatory biases often prevent practitioners from considering evidence impartially (and lead to a tendency to search for evidence consistent with the conclusion they believe to be correct), while illusory correlations involve the tendency to see two events as being related when they are not, or are related to a lesser extent than assumed. Finally, an over-emphasis on the unique characteristics of a case can lead to a tendency to believe that similar cases are quite different and that unique characteristics are better predictors than those that are more common.
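One practical check on the reliability question raised above is to have two staff members independently rate the same clients and to quantify their agreement, for example with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The Python sketch below uses invented ratings and is offered only as one possible way of operationalizing such a check, not as a procedure described in the literature reviewed here.

from collections import Counter

# Hypothetical ratings of client change by two staff members: -1 = worse, 0 = no change, 1 = improved.
rater_a = [1, 0, 1, -1, 0, 1, 0, 0, 1, -1]
rater_b = [1, 0, 0, -1, 0, 1, 1, 0, 1,  0]

def cohens_kappa(a, b):
    """Chance-corrected agreement: 1.0 = perfect agreement, 0 = agreement no better than chance."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.52 for these made-up ratings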

Ways forward?

What emerges from this paper is the idea that an evidence-based approach to client assessment is required. Although the term evidence-based assessment has been used in the scientific literature in a number of different ways, Hunsley and Mash (2007; 2008) identify two underlying principles of evidence-based assessment as follows. First, the selection of constructs to be assessed and the assessment process should be guided by scientifically supported theories and empirical evidence that establish important facets of a particular problem or area of need. Second, practitioners should opt for instruments that are psychometrically strong. Whereas assessment measures that are commonly used in youth justice meet the first criterion, they fall short of the second. In addition to evidence of reliability, validity, and clinical utility, measures should also have appropriate norms for norm-referenced interpretation and/or replicated supporting evidence regarding the accuracy of cut-off scores used for criterion-referenced interpretation. This also extends to individual characteristics, with a need for evidence-based assessments to be sensitive to an individual’s age, gender, and ethnicity, as well as specific cultural factors.
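By way of illustration of the kind of psychometric evidence referred to above, the internal consistency of a multi-item self-report scale is commonly summarized with Cronbach's alpha, calculated as k/(k - 1) multiplied by (1 minus the sum of the item variances divided by the variance of the total score). The Python sketch below applies the standard formula to hypothetical item responses; it is intended only to make the reliability criterion concrete.

# Hypothetical responses of six clients to a four-item scale (rows = clients, columns = items).
responses = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
    [3, 3, 2, 3],
]

def variance(values):
    """Sample variance (n - 1 denominator)."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(data):
    k = len(data[0])                                            # number of items
    item_vars = [variance([row[i] for row in data]) for i in range(k)]
    total_var = variance([sum(row) for row in data])            # variance of the total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(round(cronbach_alpha(responses), 2))  # 0.94 for these made-up responses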

Given that youth justice services are likely to come under increasing pressure to demonstrate that they are achieving the outcomes for which they are funded, there appears to be a strong case for developing and validating needs assessment tools that meet the evidence-based assessment criteria and can be used to assess change over time. A number of tools are available, particularly self-report tools, that might be adapted for this purpose (e.g., Inventory of Offender Risk, Needs, and Strengths [Miller, 2006]; General Health Questionnaire [Goldberg & Williams, 1988]; Commitment to Education Scale [James, 2002]; Utrecht Work Engagement Scale [Schaufeli, Bakker, & Salanova, 2006]; Resilience Scale for Adolescents [READ] [Hjemdal, Friborg, Stiles, Martinussen, & Rosenvinge, 2006]). There are, however, a number of additional considerations, the first of which concerns the demands placed on both the assessor and the young person being assessed. Evidence-based assessment is also concerned with the utility of any assessment (Cohen & Parkman, 1998). This includes such factors as the costs of any assessment and the time taken to administer it. Systemic considerations, most notably time constraints and resource limitations, highlight the need for assessments that are brief, clear, feasible, and user-friendly; that is, outcome measures that are “good enough to get the job done” (Hunsley & Mash, 2008, p.5). The brevity issue is a vital one in the youth justice setting, where resources are limited, demands on staff time are considerable, and repeated administrations are needed to detect change over time. There is a need to find a balance between setting criteria that are too stringent (rendering assessment a worthless exercise) and criteria that are too lenient (thereby undermining the notion of evidence-based assessment) (see Kazdin, 2005).

Another important issue, if a triangulated approach to outcome measurement is to be adopted (i.e., data are collected from multiple sources, such as client self-report, staff ratings, and collateral sources), is to ensure that any observed change in ratings is not misinterpreted. The extent to which youth justice clients and professionals have quite different perceptions of whether change has occurred is currently unknown, and this represents an important avenue for further investigation. The judgment about when change is both clinically and statistically significant is also an important issue (Nunes, Babchishin, & Cortoni, 2011) and relates to the program logic that underpins a service contact (i.e., the intended relationship between ‘inputs’ and ‘outputs’ [Pawson, 2006]). It may also be that the amount of change that is possible is constrained by a number of external or systemic factors.
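One widely used way of judging whether an individual's pre-to-post difference exceeds what measurement error alone could produce is the Jacobson-Truax reliable change index (RCI), which divides the observed change by the standard error of the difference. The Python sketch below uses hypothetical scale parameters and is offered only as one possible operationalization of the statistical-significance side of this question, not as a method adopted by the services discussed here.

import math

def reliable_change_index(pre, post, sd_baseline, reliability):
    """Jacobson-Truax RCI: observed change divided by the standard error of the difference."""
    se_measurement = sd_baseline * math.sqrt(1 - reliability)
    se_difference = math.sqrt(2) * se_measurement
    return (post - pre) / se_difference

# Hypothetical values: a needs scale with baseline SD = 6.0 and test-retest reliability = 0.80.
rci = reliable_change_index(pre=24, post=16, sd_baseline=6.0, reliability=0.80)
print(round(rci, 2))  # -2.11; |RCI| > 1.96 is conventionally treated as change unlikely to be measurement error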

The suggestion for future practice is to develop an assessment tool that incorporates both client self-report and case worker ratings of need in each of the domains in which a particular service seeks to bring about change. The assessment can then be re-administered at the end of an order or service contact to examine the extent to which change has occurred. This will then offer concrete data on individual client change, which (when scores are aggregated across groups of young offenders) can directly inform judgments about the effectiveness of a service. While it is relatively easy to identify the type of self-report measure that might be used (see above), the challenge will be to establish the psychometric properties of any new assessment. In our view, the minimum required for this would be a pilot test of the tool to establish: (a) the factor structure and construct validity of the self-report scales used with clients; (b) the reliability of staff ratings; and (c) the extent to which scores on the measure are sensitive to change over time. Work of this nature is currently being undertaken by Youth Justice and Youth Services in Victoria, Australia, although the results of this pilot project have yet to be reported.

A number of other questions arise from this type of approach. For example, an important issue concerns the threshold for determining when significant change has occurred (i.e., What does a reduced score on an outcome measure actually mean in terms of behavior change? How much change should be expected?). It is, therefore, important that further validation includes an examination of the relationship between scores on the assessment measures and longer-term outcomes such as re-offending or re-entry into the justice system. It is also likely that different individuals will have different needs and that some groups (e.g., young girls) will have different needs from others. There may be some outcomes that are specific to the setting in which services are delivered. There are two ways in which a new assessment process might address these issues. The first is to ensure that group-level outcomes are reported in a stratified way (by age group, gender, and culture, for example). The second is to produce individual reports that identify those changes each client has made and that can readily be incorporated into case files. This would allow only those outcomes that are considered relevant to identified needs in the case plan to be interpreted as meaningful. These, and other issues, will require careful thought and extensive testing if they are to be adequately resolved. They lie, however, at the heart of questions about what constitutes effective practice with young people in juvenile justice settings and, in our view, should be the topic of much more debate within the field.
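The stratified reporting suggested above amounts to aggregating individual change scores within subgroups rather than across the whole caseload, as in the short Python sketch below. The records and field names are hypothetical and serve only to illustrate the reporting structure.

from collections import defaultdict

# Hypothetical individual change scores on a needs measure (positive values = improvement).
records = [
    {"gender": "female", "age_group": "10-14", "change": 1.5},
    {"gender": "male",   "age_group": "15-18", "change": 0.5},
    {"gender": "female", "age_group": "15-18", "change": -0.5},
    {"gender": "male",   "age_group": "10-14", "change": 2.0},
    {"gender": "male",   "age_group": "15-18", "change": 1.0},
]

groups = defaultdict(list)
for record in records:
    groups[(record["gender"], record["age_group"])].append(record["change"])

for (gender, age_group), changes in sorted(groups.items()):
    print(f"{gender}, {age_group}: mean change {sum(changes) / len(changes):+.2f} (n={len(changes)})")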

Notwithstanding these issues, the conclusion of this paper is that evidence-based assessment in the provision of professional services is the cornerstone of best practice in most modern health and human service systems and that there is scope to develop this aspect of youth justice service provision in ways that allow client outcomes to be reliably assessed and interpreted. This involves not only a consideration of population-based recidivism statistics, but also an analysis of change in those areas of individual client need that the service aims to address.

About the Authors

Andrew Day, DClinPsy, is a professor of clinical and forensic psychology in the School of Psychology, Deakin University, Geelong Waterfront Campus, Victoria, Australia. He has worked in both juvenile and adult justice settings. Dr. Day’s research centers on the development of effective rehabilitation practices.

Sharon Casey, PhD, is a senior lecturer at Deakin University, Geelong Waterfront Campus, Victoria, Australia.

References

Allen, J.P., Marsh, P., McFarland, C., Jodl, K.M., Boykin McElhaney, K., Land, D.J., & Peck, S. (2002). Attachment and autonomy as predictors of the development of social skills and delinquency during mid-adolescence. Journal of Consulting and Clinical Psychology, 70(1), 56–66.

Allen, J.P., Moore, C.M., & Kuperminc, G.P. (1997). Developmental approaches to understanding adolescent deviance. In S.S. Luthar & J.A. Burack (Eds.), Developmental psychopathology: Perspectives on adjustment, risk, and disorder (pp. 548–567). New York: Cambridge University Press.

Andrews, D. A., & Bonta, J. (2010). Rehabilitating criminal justice policy and practice. Psychology, Public Policy, and Law, 16, 39–55.

Andrews, D.A., Robinson, D., & Hoge, R.D. (1984). Manual for the Youth Level of Service Inventory. Ottawa, Ontario: Department of Psychology, Carleton University.

Ayers, C.D., Williams, J.H., Hawkins, J.D., Peterson, P.L., Catalano, R.F., & Abbott, R.D. (1999). Assessing correlates of onset, escalation, deescalation and desistence of delinquent behaviour. Journal of Quantitative Criminology, 15(3), 277–306.

Barnes, G. M., Welte, J. W., & Hoffman, J. H. (2002). Relationship of alcohol use to delinquency and illicit drug use in adolescents: Gender, age, and racial/ethnic differences. Journal of Drug Issues, 32, 153–178.

Barnette, J. J. (2000). Effects of stem and Likert response option reversals on survey internal consistency: If you feel the need, there is a better alternative to using those negatively worded stems. Educational and Psychological Measurement, 60, 361–370.

Bergen, H.A., Martin, G., Richardson, A.S., Allison, R.S., & Roeger, S.L. (2004). Sexual abuse, antisocial behaviour and substance use: Gender differences in young community adolescents. Australian and New Zealand Journal of Psychiatry, 38(1–2), 34–41.

Biglan, A., Metzler, C. A., Wirt, R., Ary, D., Noell, J., Ochs, L., French, C., & Hood, D. (1990). Social and behavioral factors associated with high-risk sexual behavior among adolescents. Journal of Behavioral Medicine, 13(3), 245–262.

Borum, R., Bartel, P., & Forth, A. (2003). Manual for the structured assessment of violence risk in youth (SAVRY): Version 1.1. Tampa: University of South Florida.

Brannon, R. (1981). Current methodological issues in paper and pencil measuring instruments. Psychology of Women Quarterly, 5, 618–627.

Burrow, A.L., Tubman, J.G., & Finley, G.E. (2004). Adolescent adjustment in a nationally collected sample: Identifying group differences by adoption status, adoption subtype, developmental stage and gender. Journal of Adolescence, 27(3), 267–282.

Carroll, A., Durkin, K., Hattie, J., & Houghton, S. (1997). Goal setting among adolescents: A comparison of delinquent, at-risk, and not-at-risk youth. Journal of Educational Psychology, 89(3), 441–450.

Carroll, A., Hattie, J., Durkin, K., & Houghton, S. (2001). Goal-setting and reputation enhancement: Behavioural choices among delinquent, at-risk and not at-risk adolescents. Legal and Criminological Psychology, 6, 165–184.

Catalano, R. F., & Hawkins, J. D. (1996). The Social Development Model: A theory of antisocial behaviour. In J. D. Hawkins (Ed.), Delinquency and crime: Current theories. (pp. 149–197). New York: Cambridge University Press.

Chung, I., Hawkins, J.D., Gilchrist, L.D., Hill, K.G., & Nagin, D.S. (2002). Identifying and predicting offending trajectories among poor children. Social Service Review, Dec., 663–685.

Cohen, S., & Parkman, H. P. (1998). Esophageal manometry in clinical practice: The need for evidence-based assessment of clinical efficacy. American Journal of Gastroenterology, 93, 2319–2320.

Collins, W.A., & Laursen, B. (2004). Changing relationships, changing youth: Interpersonal contexts of adolescent development. Journal of Early Adolescence, 24(1), 55–62.

da Silva, L., Sanson, A., Smart, D., & Toumbourou, J. (2004). Civic responsibility among Australian adolescents: Testing two competing models. Journal of Community Psychology, 32(3), 229–255.

Davies, K., Lewis, J., Byatt, J., Purvis, E., & Cole, B. (2004). An evaluation of the literacy demands of general offending behaviour programs. London: HMSO Home Office Research, Development and Statistics Directorate 233.

Day, A., & Casey, S. (2011). Youth justice outcomes measurement and reporting. Melbourne, Victoria: Victorian Department of Human Services, Youth Justice and Youth Services Branch.

Day, A., Howells, K., & Rickwood, D. (2004). Current trends in the rehabilitation of juvenile offenders. Trends & Issues in Crime and Criminal Justice, No 285, 1–6.

Dishion, T.J., Nelson, S.E., & Bullock, B.M. (2004). Premature adolescent autonomy: Parent disengagement and deviant peer process in the amplification of problem behaviour. Journal of Adolescence, 27, 515–530.

Dobkin, P.L., Tremblay, R.E., & Sacchitelle, C. (1997). Predicting boy’s early-onset substance abuse from father’s alcoholism, son’s disruptiveness, and mother’s parenting behaviour. Journal of Consulting and Clinical Psychology, 65(1), 86–92.

Dolan, M., & Doyle, M. (2000). Violence risk prediction: Clinical and actuarial measures and the role of the Psychopathy Checklist. British Journal of Psychiatry, 177, 303–311.

Douglas, K. S., & Skeem, J. L. (2005). Violence risk assessment: Getting specific about being dynamic. Psychology, Public Policy, and Law, 11, 347–383.

Farnworth, M., & Leiber, M. J. (1989). Strain theory revisited: Economic goals, educational means, and delinquency. American Sociological Review, 55(2), 236–279.

Farrington, D.P. (1992). Explaining the beginning, progress and ending of antisocial behaviour from birth to adulthood. In J. McCord (Ed.), Facts, frameworks and forecasts: Advances in criminological theory (Vol. 3, pp. 253–286). New Brunswick, NJ: Transaction Publishers.

Farrington, D.P. (1995). The challenge of teenage antisocial behaviour. In M. Rutter (Ed.), Psychosocial disturbances in young people: Challenges for prevention (pp 83–130). Cambridge: Cambridge University Press.

Farrington, D.P. (Ed.). (2005). Integrated developmental and life-course theories of offending. New Brunswick, NJ: Transaction Publishers.

Farrington, D. P., & Hawkins, J. D. (1991). Predicting participation, early onset and later persistence in officially recorded offending. Criminal Behavior and Mental Health, 1, 1–33.

Fergusson, D.M. & Horwood, L.J. (1999). Prospective childhood predictors of deviant peer affiliations in adolescence. Journal of Child Psychology & Psychiatry, 40(4), 581–592.

Fergusson, D.M. & Woodward, L.J. (2000). Educational, psychosocial, and sexual outcomes of girls with conduct problems in early adolescence. Journal of Child Psychology & Psychiatry, 41(6), 779–792.

Flannery, D., Vazsonyi, A. T., & Rowe, D. C. (1996). Caucasian and Hispanic early adolescent substance use: Parenting, personality, and school adjustment. Journal of Early Adolescence, 16, 71–89.

Forth, A. E., Kosson, D., & Hare, R. D. (2003). The Hare PCL: Youth Version. Toronto, ON: Multi-Health Systems.

Garwick, A., Nerdahl, P., Banken, R., Muenzenberger-Bretl, L., & Sieving, R. (2004). Risk and protective factors for sexual risk taking among adolescents involved in Prime Time. Journal of Pediatric Nursing, 19(5), 340–350.

Goldberg, D., & Williams, P. (1988). A user’s guide to the General Health Questionnaire. Windsor, UK: NFER-Nelson.

Halfours, D.D., Waller, M.W., Ford, C.A., Halpern, C.T., Brodish, P.H. & Iritani, B. (2004). Adolescent depression and suicide risk: Association with sex and drug behaviour. American Journal of Preventive Medicine, 27(3), 224–231.

Hawkins, M. T., Letcher, P., Sanson, A. , Smart, D., & Toumbourou, J. W. (2009). Positive development in emerging adulthood. Australian Journal of Psychology, 61, 89–99.

Hjemdal, O., Friborg, O., Stiles., T.C., Martinussen, M., & Rosenvinge, J. (2006). A new rating scale for adolescent resilience. Grasping the central protective resources behind healthy development. Measurement and Evaluation in Counseling and Development, 39, 84–96.

Hoge, R., & Andrews, D. (2002). The Youth Level of Service/Case Management Inventory. Toronto, ON, Canada: Multi-Health Systems.

Hunsley, J., & Mash, E. J. (2007). Evidence-based assessment. Annual Review of Clinical Psychology, 3, 57–79.

Hunsley, J., & Mash, E. J. (2008). Developing criteria for evidence-based assessment: An introduction to assessments that work. In J. Hunsley & E.J. Mash (Eds.), A guide to assessments that work (pp. 3–14). New York: Oxford University Press.

Jaccard, J., Blanton, H., & Dodge, T. (2005). Peer influences on risk behaviour: An analysis of the effects of a close friend. Developmental Psychology, 41(1), 135–147.

James, R. (2002). Background and higher education participation: An analysis of school students’ aspirations and expectations. Melbourne, Australia: Centre for the Study of Higher Education, The University of Melbourne.

Jeltova, I., Fish, M.C., & Revenson, T.A. (2005). Risky sexual behaviours in immigrant adolescent girls from the former Soviet Union: Role of natal and host culture. Journal of School Psychology, 43, 3–22.

Katsiyannis A., & Archwamety, T. (1999). Academic remediation/achievement and other factors related to recidivism rates among delinquent youths. Behavioral Disorders 24, 93–101.

Kazdin, A. E. (2005). Evidence-based assessment of child and adolescent disorders: Issues in measurement development and clinical application. Journal of Child Clinical and Adolescent Psychology, 34, 253–276.

Kosterman, R., Haggerty, K.P., Spoth, P., & Redmond, C. (2004). Unique influences of mothers and fathers on their children’s antisocial behaviour. Journal of Marriage and Family, 66, 762–778.

Kosterman, R., Hawkins, J.D., Guo, J., Catalano, R.F., & Abbott, R.D. (2000). The dynamics of alcohol and marijuana initiation: Patterns and predictors of first use in adolescence. American Journal of Public Health, 90(3), 360–366.

Kroner, D. G., & Loza, W. (2001). Evidence for the efficacy of self-report in predicting nonviolent and violent criminal recidivism. Journal of Interpersonal Violence, 16, 168–177.

Lerner, R. M. (1993). A developmental contextual view of human development. In S. C. Hayes, L. J. Hayes, H. W. Reese, & T. R. Sarbin (Eds.), Varieties of scientific contextualism (pp. 301–316). Reno, NV: Context Press.

Lerner, R. M., & Galambos, N. L. (1998). Adolescent development: Challenges and opportunities for research, programs, and policies. Annual Review of Psychology, 49, 413–446.

Leschied, A. W., & Cunningham, A. (2000). Intensive community-based services can influence re-offending rates of high risk youth: Preliminary results of the multisystemic therapy clinical trials in Ontario. Empirical and Applied Criminal Justice Research Journal, 1(1), 1–24.

Lipsey, M. W. (2010). The primary factors that characterize effective interventions with juvenile offenders: A meta-analytic overview. Victims and Offenders, 4, 124–147.

Lloyd, C., Mair, C., & Hough, M. (1994). Explaining reconviction rates: A critical analysis. Home Office Research Study No. 136, London: HMSO.

Loeber, R., Stouthamer-Loeber, M., Van Kammen, W. B., & Farrington, D. P. (1991). Initiation, escalation and desistance in juvenile offending and their correlates. Journal of Criminal Law and Criminology, 82, 36–82.

Loza, W., Loza-Fanous, A., & Heseltine, K. (2007). The myth of offenders’ deception on self-report measure predicting recidivism: Example from the Self-Appraisal Questionnaire (SAQ). Journal of Interpersonal Violence, 22, 671–683.

Maguin, E., & Loeber, R. (1996). Academic performance and delinquency. In M. Tonry (Ed.), Crime and Justice: A Review of Research (Vol. 20). Chicago, Ill.: University of Chicago Press.

Marsh, H. W. (1986). Negative item bias in rating scales for pre-adolescent children: A cognitive-developmental phenomenon. Developmental Psychology, 22, 37–49.

Miller, H. A. (2006). Inventory of Offender Risk, Needs, and Strengths. Lutz, FL: PAR.

Ministry of Justice (2010). Breaking the cycle: Effective punishment, rehabilitation and sentencing of offenders. London: HMSO.

Moffitt, T. E. (1993). Adolescence-limited and life-course-persistent antisocial behaviour: A developmental taxonomy. Psychological Review, 100(4), 674–701.

Monk-Turner, E. (1989). Effects of high school delinquency on educational attainment and adult occupational status. Sociological Perspectives, 32, 413–418.

Motiuk, M. S., Motiuk, J. L., & Bonta, J. (1992). A comparison between self-report and interview-based inventories in offender classification. Criminal Justice and Behavior, 19(2), 143–159.

Mullis, R.L., Cornille, T.A., Mullis, A.K. & Huber, J. (2004). Female juvenile offending: A review of characteristics and contexts. Journal of Child and Family Studies, 13(2), 205–218.

National Audit Office (2010). The youth justice system in England and Wales: Reducing offending by young people. Report by the Comptroller and Auditor General HC 663 2010–2011. National Audit Office, UK.

Noetic Solutions Pty. (2010). Review of effective practice in juvenile justice. Report for the Minister for Juvenile Justice. Melbourne, Australia.

Nunes, K. L., Babchishin, K. M., & Cortoni, F. (2011). Measuring treatment change in sex offenders: Clinical and statistical significance. Criminal Justice and Behavior, 38, 157–173.

Parry, C.D.H., Morojele, N.K., Saban, A., & Flisher, A.J. (2004). Brief report: Social and neighbourhood correlates of adolescent drunkenness; a pilot study in Cape Town, South Africa. Journal of Adolescence, 27, 369–374.

Pawson, R. (2006). Evidence-based policy: A realist perspective. London: Sage.

Richards, K. (2011). What makes juvenile offenders different from adult offenders? Trends and Issues in Crime and Criminal Justice, 409, 1–8.

Rutter, M.L. (1997). Nature-Nurture integration: The example of antisocial behaviour. American Psychologist, 52(4), 370–398.

Sampson, R.J., & Laub, J.H. (1997). A life-course theory of cumulative disadvantage and stability of delinquency. In T.P. Thornberry (Ed.), Advances in criminological theory (Vol. 7): Developmental theories of crime and delinquency (pp.133–162). New Brunswick, NJ: Transaction Publishers.

Sampson, R.J., & Laub, J.H. (2005). A life-course view of the development of crime. The Annals of the American Academy of Political and Social Science, 602(1), 12–45.

Schaufeli, W.B., Bakker, A.B., & Salanova, M. (2006). The measurement of work engagement with a short questionnaire: A cross-national study. Educational and Psychological Measurement, 66, 701–716.

Schwalbe, C. S. (2007). Risk assessment for juvenile justice: A meta-analysis. Law and Human Behavior, 31, 449–462.

Scriven, M. (1998). Minimalist theory of evaluation: The least theory that practice requires. American Journal of Evaluation 19, 57–70.

Shields, I. (1993). The use of the Young Offender-Level of Service Inventory (YO-LSI) with adolescents. IARCA Journal, 5(10), 26.

Singh, J. P., Grann, M., & Fazel, S. (2011). A comparative study of violence risk assessment tools: A systematic review and metaregression analysis of 68 studies involving 25,980 participants. Clinical Psychology Review 31, 499–513.

Stephenson, M., Giller, H., & Brown, S. (2007). Effective Practice in Youth Justice. Cullompton: Willan.

Stewart, A., & Thompson, C., (2004). Comparative evaluation of child protection assessment tools. Queensland: Griffith University.

Thornberry, T.P. (1997). Introduction: Some advantages of developmental and life-course perspectives for the study of crime and delinquency. In T.P. Thornberry (Ed.), Advances in criminological theory (Vol. 7): Developmental theories of crime and delinquency (pp.1–10). New Brunswick, NJ: Transaction Publishers.

Thornberry, T., Moore, M., & Christenson, R.L. (1985). The effect of dropping out of high school on subsequent criminal behavior. Criminology, 23, 3–18.

Tolan, P.H., Guerra, N.G., & Kendall, P.C. (1995). A developmental-ecological perspective on antisocial behaviour in children and adolescents: Toward a unified risk and intervention framework. Journal of Consulting and Clinical Psychology, 63(4), 579–584.

Tresidder, J., Payne, J., & Homel, R. (2009). Measuring Youth Justice Outcomes. Canberra: Australian Institute of Criminology.

Turner, R.A., Irwin, C.E., Tschann, J.M., & Millstein, S.G. (1993). Autonomy, relatedness, and initiation of health risk behaviours in early adolescence. Health Psychology, 12(3), 200–208.

VanBenschoten, S. (2008). Risk/needs assessment: Is this the best we can do? Federal Probation, 72, 38–42.

Van der Put, C. E., Dekovic, M., Stams, G. J. J. M., Van der Laan, P. H., Hoeve, M., & Van Amelsfort, L. (2011). Changes in risk factors during adolescence: Implications for risk assessment. Criminal Justice and Behavior, 38, 248–262.

Welsh, J. L., Schmidt, F., McKinnon, L., Chattha, H. K., & Meyers, J. R. (2008). A comparative study of adolescent risk assessment instruments: Predictive and incremental validity. Assessment, 15, 104–115.

Youth Justice Board (2006). Asset: Young Offender Assessment Profile. Available from http://www.justice.gov.uk/guidance/youth-justice/assessment/asset-young-offender-assessment-profile.htm (Accessed 3 February 2012).
