Talking About Things Important to Me: Mental Health Consumers' Experiences of Consumer-Rated Measures
Funding: This work was made possible by a small grant provided by the Australian Mental Health Outcomes and Classification Network (AMHOCN).
ABSTRACT
Since 2002, National Outcomes and Casemix Collection of clinician-rated and consumer-rated outcome measures has become part of routine care within Australian clinical mental health services, aiming to ensure that services understand, improve and are accountable for the effectiveness of treatment and care provision. Consumer-rated outcome measures, implemented well, support the basic human rights of consumers to be asked, heard and included equally in their own care. However, their use has lagged due to clinician inertia, uncertainty about their value to clinical care, assumptions about consumers' capacity to complete the measures and organisational cultural issues that have hampered more holistic assessment, consumer inclusion and care collaboration. Much is known about negative, largely tokenistic use of such measures, poor uptake and the dominance of clinical approaches to measurement that privilege clinical expertise; however, little is known about consumers' positive experiences of using consumer-rated measures. Therefore, our aims were as follows: to seek the views and experiences of mental health consumers of using consumer-rated measures in their encounters with clinicians; to understand better whether there were benefits (and if so what) of consumer-rated measures being used in routine mental health practice; to understand how feedback on the use of consumer-rated measures can inform training for mental health staff; and to promote their wider use within mental health services. In-depth interviews conducted with 10 Australian mental health consumers used interview questions co-designed with lived experience and clinical advocates. Descriptive thematic analyses produced four themes emphasising consumers' preferences for completing the measures, the importance of explaining their purpose, and how the process validated their feelings and provided an opportunity for self-reflection, sense-making, trust-building, transparency in the encounter and empowerment. This research offers recommendations about the value of effective implementation of consumer-rated measures.
1 Background
The National Outcomes and Casemix Collection (NOCC) commenced in mental health services in 2002. Its ongoing development and review, and support for its implementation through training delivered to the mental health workforce nationally, are the responsibility of the Australian Mental Health Outcomes and Classification Network (AMHOCN) (AMHOCN 2022). The NOCC measures are mandatory and have become part of routine care, aiming to ensure that mental health services continue to understand, improve and be accountable for the effectiveness of the care that they provide to consumers (the term used predominantly in Australian policy and practice to denote people with mental health challenges, also known as patients or service users).
The NOCC comprises a set of mandatory clinical measures that clinicians in mental health services across Australia complete as part of routine treatment and care of mental health consumers, on admission, at review and transitions from care. These measures are not completed with consumers' active input. They include: the Health of the Nation Outcome Scales (HoNOS) (Wing et al. 1998); Life Skills Profile 16 (LSP-16) (Buckingham et al. 1998); and Phase of Care (PoC) (Independent Hospital Pricing Authority 2016). The consumer-rated outcome measures (CROMs) (i.e., those given to the consumer to complete) are, depending on the state and territory, one of either the: Mental Health Inventory (MHI-38) (Veit and Ware 1983); Behavior and Symptom Identification Scale 32 (BASIS-32) (Eisen, Dill, and Grob 1994); or Kessler 10 Plus (K10+) (Kessler et al. 2002). The Strengths and Difficulties Questionnaire (SDQ) (Goodman 1997) is the CROM offered in child and adolescent mental health services in all Australian states and territories.
The intent underpinning the NOCC is that the measures should:
- ‘Be meaningful to consumers, carers, clinicians and the community.
- Provide a consistent approach to measurement regardless of where the consumer receives their care.
- Be embedded in a national minimum dataset.
- Provide timely and meaningful information in formats that can be understood’ (National Mental Health Information Development Expert Advisory Panel 2013, 11).
The National Framework for Recovery-Oriented Mental Health Services (Commonwealth of Australia 2013) calls on clinicians to provide respectful, person-centred relationships, practices and service environments that promote general health and well-being outcomes. The use of standard CROMs, such as those used in the NOCC, may provide an opportunity for the delivery of open and transparent care in which the consumer's voice is clearly heard.
Collection and use of CROMs, done well, can support the basic human rights of consumers to be asked, heard and included equally in their own care (WHO 2021). While some have questioned their value (Happell 2008) and the deficiencies of what is being measured (Oster et al. 2023), the importance and usefulness of CROMs in striving to improve services to be more recovery-oriented has been noted (Roe, Slade, and Jones 2022). When implemented well, such measures may aid consumers' self-reflection and sense of ownership in the therapeutic process when interacting with clinicians (Rognstad et al. 2023). However, the offering and collection of CROMs has lagged behind the routine uptake of clinician-rated measures, even though these measures were chosen by consumers through consultation processes in Australian states and territories as part of establishing the NOCC. The cultural and structural barriers behind this lag are largely known. They include clinician inertia, uncertainty about their value to clinical care, concerns about consumers' capacity to complete the measures, stigma surrounding paternalistic practices, a perceived lack of time or the inconvenience of completing the CROM weighed against other administrative commitments, and organisational cultural issues that have hampered more holistic assessment and consumer collaboration in care (National Mental Health Information Development Expert Advisory Panel 2013; Gelkopf, Mazor, and Roe 2022). Trauer, Callaly, and Herrman (2009) found that ease of use of the measures was reported to be significantly higher by mental health clinicians who had been trained in their use, and that those who had been exposed to consumer feedback said they found the measures more valuable and easier to use.
Less is known about experiences of providing the CROMs to mental health service consumers from the consumer perspective; what makes them helpful, and why they may be helpful for some consumers but not others (Solstad et al. 2021). A small number of early studies involved consumers who reported that they thought it helped the clinician understand them better, and resulted in better care (Guthrie et al. 2008; Black et al. 2009). A number of recent reviews have focused on evidence from quantitative and qualitative studies with the aim of understanding this topic in more detail. A systematic review to assess the effects of routine measurement and feedback of the results of CROMs (Kendrick et al. 2016), focusing on 17 randomised controlled trials (RCTs), found no evidence of a difference in mental health symptom and treatment outcomes between the feedback and non-feedback groups. However, a more recent systematic review building on that work (Rognstad et al. 2023), involving 39 RCTs, found that feedback on CROMs had a small positive effect on treatment outcomes, particularly for consumers whose treatment was assessed as ‘not-on-track’ for achieving its goals. A meta-analysis of both RCTs and non-RCTs (de Jong et al. 2021) found a small positive effect of CROMs on symptom reduction, though this and Rognstad et al.'s review acknowledged significant heterogeneity and problems with quality in the included studies. A systematic review and synthesis of 16 qualitative studies on consumers' experiences of CROMs in mental health services (Solstad, Castonguay, and Moltu 2019) identified a complex and nuanced set of experiences arising from this process, including positive and negative implications for trust, consumer empowerment and collaborative practice. More recently, a systematic review and meta-analysis of qualitative studies of therapists' and consumers' experiences of CROMs (Låver et al. 2024) confirmed that several relational processes arise beyond CROMs' uses for assessment, monitoring and treatment planning; these included their role in facilitating communication, enhancing therapeutic alliance and empowering consumers.
The aims of this study were therefore as follows:
- To seek the views and experiences of mental health consumers of using CROMs in their encounters with clinicians.
- To understand better whether there were benefits (and if so what) of CROMs being used in routine mental health practice.
- To understand how feedback on the use of CROMs can inform training for mental health staff, to promote their wider use within mental health services.
2 Methods
2.1 Design
The current study used a descriptive qualitative methodology. This is an appropriate method when little is known about the phenomena of interest and when the research involves a small sample (Liamputtong 2019). In order to understand consumers' positive experiences of CROMs, interviews that sought their opinions seemed the most appropriate approach. The paper is reported in accordance with the COREQ checklist for qualitative studies.
2.2 Interview Guide Development
In December 2021, a group of consumer and clinical advocates (n = 10 plus facilitation by the Lived Experience project officer) with interest and expertise in outcome measures was purposefully gathered to participate in a focus group to help design the interview guide for the study. The focus group comprised a range of stakeholder groups who could provide both lived experience perspectives and clinical perspectives on the use of the measures, to build a comprehensive picture of issues to consider such as intention, process, rationale and impact of using the measures. During the focus group discussions, if differences in viewpoints arose in content and wording of interview questions, lived experience perspectives were privileged (see Table 1 for details).
Role | Justification |
---|---|
An experienced facilitator with mental health lived experience as a consumer (project officer DJ) | This was important to ensure that a lived experience focus underpinned and led the discussion, and that issues of power (such as privileging of traditional clinical dominance) were addressed and set aside |
Two people with lived experience as consumers, and experience in advocacy for mental health reform | The perspectives of consumers were central and crucial to the discussions, as people who complete the measures |
Two people with lived experience as family carers, and experience in advocacy for mental health reform | The perspectives of family carers provided further critical perspectives of the use and potential impacts of the use of the measures (positive or negative) on the lives of their family members and their wider interpersonal network of supports |
An occupational therapist, psychiatrist, and psychologist | The perspectives of clinician disciplines that commonly use the outcome measures in their practice provided insights into the current use of measures, including expectations of their use, that might inform the interview questions |
Two members of AMHOCN (Research team members RD and TC) | Largely observers who were available to clarify any questions that the group might have during the focus group discussions, if needed |
Lived experience researcher (project lead SL), with lived experience as a consumer and family carer | Largely observer who was able to help balance power sharing processes as part of the group discussions by ensuring the consumer perspective was central to the group's summations about the interview questions to be included |
Lived experience contributors | n = 6 |
Clinical contributors | n = 3 |
Observers | n = 2 |
Total | n = 11 |
Focus group members met twice to discuss and refine the interview guide questions, which were then pilot tested with a consumer volunteer; minor revisions to wording were made based on their feedback. The interview guide covered the main areas of interest (Table 2).
1. General experience with the measures
1.1 Have you ever been asked to complete a questionnaire like this one?
1.2 How often?
1.3 Who usually offered it to you?
1.4 What did they say when they offered it to you?
1.4.1 Did they tell you why you were being asked to complete it?
1.4.2 Did they tell you about what will be done with the questionnaire after you have completed it?
1.5 Once you had completed it, did anyone go through your responses to the questions and talk about them with you?
2. Clinician skills
2.1 What do you think about the way the clinician offered the questionnaire to you?
2.2 Did what they said and did when they offered the questionnaire(s) give you some sense that this was a useful thing for you to do?
2.3 Did you get the impression that they valued your responses?
2.4 Did you have a chance to discuss any questions or concerns you might have had about what would be done with the completed questionnaires or what use might be made of the information you had provided?
2.5 Was the clinician able to address those questions or concerns?
3. How could these measures support your recovery?
3.1 What does recovery mean for you?
3.2 How do you think the completion of the measures could assist you with that?
3.3 Did answering the questions lead you to think about your current situation any differently?
3.4 Did you then make any changes or thought that you might do so?
3.5 Did you feel your responses helped the worker to understand you any better?
3.6 Did the worker take notice of the responses at all (did the worker make any particular comments and body language to indicate what they thought)?
4. Assessment of change
4.1 Did you see any evidence that clinicians were using the measure(s) to gauge change?
4.2 To what extent do you think clinicians were aware of or concerned whether your scores on the measures had changed for the better or worse in any way that particularly mattered to you?
5. Any other comments?
2.3 Recruitment of Interview Participants and Sample
Recruitment of interview participants involved an invitation to mental health consumers across Australia by Lived Experience Australia within its monthly e-newsletter to its 4000+ national subscribers. The invitation specified that we were seeking participants who could speak to any positive experiences of completing outcome measures, recognising that they may also have had negative experiences. Ten participants were purposefully drawn by the first two authors from the first 20 individuals who expressed interest. The sample selection criteria and the final included sample reflected diverse ages and genders, and consumers of child and adolescent, adult and older person public sector mental health services who had recent experience (within the past 2 years) of being offered and/or completing CROMs as part of their care.
2.4 Data Collection
The project officer conducted in-depth interviews with 10 consumers via Zoom or telephone between December 2021 and March 2022. Interviews lasted between 45 and 70 min and were professionally transcribed to ensure accuracy and timeliness for analysis. The authors met as a group early in the data collection process (after the first and third interviews were completed) to discuss and reflect on the interview process and preliminary data. This served to affirm the interview guide questions and supported the Lived Experience project officer in their role as interviewer for the remaining interviews.
2.5 Data Analysis
We used a descriptive thematic analysis approach, as described by Braun and Clarke (2021), that included a structured approach to coding: multiple coders first worked independently to apply the coding frame to the data and then worked together, managing potential bias and improving coding reliability through group discussion until final coding and themes were agreed by consensus. Robust group discussions enabled the research team to reflect on and gain an in-depth understanding of participants' experiences of using the measures. This process led to data saturation after three iterations, the point at which the data did not lead to any new codes or emerging themes (Saunders et al. 2018). We adapted Braun et al.'s (2019) steps (see Box 1 below) to maximise whole-group discussions for coding and tentative theme and subtheme generation. This also acknowledged and supported the research team's shared learning within an inclusive process.
BOX 1. Thematic analysis steps.
- Familiarisation—The authors each independently read and re-read the transcripts, noting initial patterns, their potential meanings and preliminary coding ideas.
- Generating initial codes—The authors then met as a group to discuss what they had noted and determined initial codes which they then clustered into tentative categories and preliminary themes, based on similarity across the data. This included written and pictorial representations of ideas which formed a preliminary coding structure.
- Formulating tentative themes and subthemes—The first and second authors met to re-code the entire dataset using the preliminary coding structure, then met with the other members of the research team, to discuss and revise the tentative themes and formulate preliminary descriptive content for the themes. Codes were sorted into the themes and subthemes according to similarity in patterns and meaning suggested by the findings.
- Finalisation of themes—The research team then met as a group to discuss and recheck the themes, re-reading the interview transcripts and coding where any discrepancies or queries arose. Through group discussion, the research team made any adjustments to the themes and subthemes as needed (leading to only minor changes), as part of reaching consensus on the final theme structure.
2.6 Ethics
The data analysis for this project was approved by the Flinders University Human Research Ethics Committee (No. 5007). Participants provided written consent for the interviews. No personally identifying information is displayed; we have used [participant number] after each quote to ensure their anonymity.
3 Results
Ten consumers with lived experience of completing outcome measures when receiving services from clinical public mental health services were interviewed. Participants varied in age, gender, location and mental health conditions. Four main themes were derived from the data analysis. These are described below, with the voices of consumers (through direct quotes) provided to highlight their experiences in detail. Detailed results can be found in Data S1.
3.1 Theme 1: Think About What Is Going On for Me When Using the Measure
All participants spoke about the challenges they faced when completing the measures and how their personal situation and preference impacted on their completion of the measures. Three subthemes were identified: (3.1.1) Wait for the right time to do the measure; (3.1.2) I have preferences; (3.1.3) Help me at a time when this is difficult.
3.1.1 Wait for the Right Time to Do the Measure
If I am presenting in crisis or if I'm having lots of suicidal thoughts that then becomes the priority, and interestingly only recently I was really struggling, and she got me to do the forms—she sent them to me out of session via email. She said I am not going to give you the forms now because you're in a state of distress and crisis. She is like ‘my main priority is to help you through this crisis and when you are feeling up to it could you please fill out these forms or send them to me via email’. [2]
3.1.2 I Have Preferences
I've found myself to be a lot more comfortable in a non-clinical setting with all my health supports across the board and I am much more comfortable in my GP's office where I know the environment. [8]
3.1.3 Help Me at a Time When This Is Difficult
I appear very high functioning and so I like, people just, you know, assume that I can engage with things that are generally quite near typical or like the traditional form of filling out a survey with pen and paper. But sometimes … they don't see the panic attack that's going on and me not being able to read the one sentence like it's just. Yeah, it's a lot. [3]
3.2 Theme 2: Explaining the Purpose Is Important
Participants described how clear explanation of why the measures were used, and the related results, in ways that they could understand influenced how they engaged with those clinicians. Two subthemes were apparent: (3.2.1) Not just an administrative exercise; and (3.2.2) Rapport and explaining enhance safety and trust.
3.2.1 Not Just an Administrative Exercise
I think the clinician has to have, I wanna feel from the clinician that they want to be there that they're not just doing their job. I don't want to come in like [Mechanical manner] … I think kindness comes with honesty, integrity. Uhm, sincerity. [7]
Participants appreciated when the clinician was mindful of the way they discussed the outcomes (e.g., not expressing shock at suicidal thoughts, and discussing the matter further with the consumer).
3.2.2 Rapport and Explaining Enhance Safety and Trust
I think it can be quite emotional and traumatic for people to be confronted with the questions on the form. And like we both said actually about getting to know the person as the first thing before giving them the form … With that safety comes honesty … if it's a safe space and you trust that person then you're likely to be more honest. [8]
3.3 Theme 3: A Helpful Tool to Improve My Experience
Many participants described how completing the measures was a helpful start in giving some order in the confusion or chaos of how they were feeling at that time. Participants also confirmed that using the measures was beneficial at multiple other levels. Findings generated four subthemes: (3.3.1) Being honest with yourself and others; (3.3.2) Validate feelings and reinforce diagnosis; (3.3.3) An empowering tool that helps record-keeping and promotes personal recovery; and (3.3.4) An ongoing self-care tool.
3.3.1 Being Honest With Yourself and Others
I think like, especially when it was the more severe time, it was like, oh, you know this is bad, like this is um yeah, it's kind of like you want to select the lower number, but being honest with yourself is like the only way to get better you know? [6]
For some participants, it helped them to be more honest than they might otherwise have been, whereas other participants also emphasised the importance of feeling safe to provide accurate responses. Across the variety of comments, what was apparent was how each participant weighed up in their mind how they should answer the questions and actively thought about how their responses would be received by the health professional, being concerned about the possibility of coercive treatment if they disclosed their true level of distress, and about what the consequences of being truthful or not might be.
3.3.2 Validate Feelings and Reinforce Diagnosis
… [it helped you appreciate that you had depression] Yes… he was putting it all kind of into context and sort of giving a reason for why. So, I guess as much as it brings you down, there's an explanation for it, so it's not this ‘why am I feeling so crappy?’ it's well this is why. Yeah, so it's, everything is kind of a mix, it's good and bad … but I think it wasn't such a surprising big thing, it was more a relief to say yes, I feel like this, I can stop pretending … all these things are stuff that you generally try and hide … It was kind of a relief to say, ‘yeah I'm not doing well in these places’ … I knew I was going there for help, so it was just part of that. [1]
3.3.3 An Empowering Tool That Helps Record-Keeping and Promotes Personal Recovery
[What would you say to new clinicians about the importance of using these surveys or measures?] For me it's a barometer of how the patient is faring in terms of their current level of pain or a trauma etc … it gives a benchmark … It's to see what the recovery process might look like going forward, so when it's repeated to see if the K10 it's actually working … with ongoing support with recovery … It's part of the process. [4].
3.3.4 An Ongoing Self-Care Tool
I have used it for myself, personally at home without a professional, just as a way of zoning in on my own mental health … I am also able to use it to flesh out … in a diary or with your email, when I want to send to one of my support team to really tell them how I'm feeling. [9]
3.4 Theme 4: An Opportunity to Take Things Further
Several participants expressed the desire for a shared dialogue with the health professional, and for having their support at hand in case it was needed, as part of the process of completing the measures. Two subthemes were identified: (3.4.1) Further discussion between consumers and clinicians; and (3.4.2) Enhancing communication, collaboration and teamwork across services.
3.4.1 Further Discussion Between the Consumers and Clinicians
She [GP] also … questions certain aspects of it, and she takes it quite seriously. I mean, it's part of the process … to get the support that one requires. But she takes the question seriously. She takes the outcomes of the question seriously … she compares it to the previous occasion or occasions. I will make a comment ‘Wow that's higher than previous’. You know something like that, she takes it seriously … Using it for, not just getting to know you better … but also getting to know when to refer on clinically … and that's why she does it quite frankly. It's so, you know, she knows for her it's the next step in the process. It's probably the first step in the process of getting the required support. [4]
3.4.2 Enhancing Communication, Collaboration and Teamwork Across Services
I kind of trusted them to share the information … They were very transparent about it. I think there was even like a signing of the document to authorize, you know? But I also kind of liked the idea of my GP and my psychologist collaborating in that way because when I was in the state that I was initially when I was seeking help, it was good to not have to have to chase that up and be that middle person. It definitely cut out the double handling of things. [6]
However, some participants also expressed awareness of, and concern about, the potential power imbalance inherent between them and clinicians as part of help-seeking and the process of completing the measures. They described this as arising from the limitations of the measures, which they saw as largely determined by clinicians and reflecting a clinical rather than a holistic perspective.
4 Discussion
This qualitative study had three aims. This section will discuss the key findings within those aims.
Views and experiences of mental health consumers of using the NOCC consumer-rated measures in their encounters with clinicians: Most consumers described NOCC CROMs as useful, which is not surprising given the focus of recruitment to this study. They explained that these tools empowered them towards self-care and personal recovery, especially when given the opportunity to drive the process. These findings are consistent with the broader literature (Gelkopf, Mazor, and Roe 2022; Solstad, Castonguay, and Moltu 2019). Burgess et al. (2017) explained that when the measures are filled out by consumers alone, they better reflect social situations because of consumers' lived experience, potentially giving the clinician a richer insight into the consumer's perspective. It is clear that failing to use the measures to facilitate a shared dialogue with the consumer is a missed opportunity for clinicians to address traditional power differences and to promote recovery-oriented care.
To understand better the benefits of the consumer-rated measures being used in routine mental health practice: Consumers explained that NOCC CROMs validated their feelings and reinforced the diagnosis. Many saw the tool as an opportunity to further discuss important health matters with clinicians. Solstad, Castonguay, and Moltu (2019) and Låver et al. (2024) described how CROMs allowed consumers to express themselves when sharing their thoughts and emotions may have been more difficult, enhancing their sense of control. A recent review recorded similar findings and described the measures as a monitoring tool and a way to improve the quality of care (Gelkopf, Mazor, and Roe 2022). Some consumers further described how the tool enhanced team communication and collaboration when shared with their other health providers; this is also supported by global evidence (Solstad, Castonguay, and Moltu 2019).
It seems clear that the greatest benefits from the NOCC CROMs arise when they are used routinely as a collaborative tool between the clinician and consumer. When not used in this way, several concerns were noted by participants, and these have been confirmed in the existing literature. Lakeman (2004), for example, asserted that routine outcome measures, ‘only provide a crude and narrow lens through which to witness recovery. It has only a limited capacity to capture the richness of people's recovery journeys or provide information that can usefully inform care. Indeed, in its implementation [clinicians] may be required to collude in practices or account for practice in ways which run counter to the personal recovery paradigm’ (p. 210). Lakeman argued that outcome measures should be viewed instead as an opportunity for clinicians to use critical reflection and more meaningful engagement with consumers. Likewise, Låver et al. (2024) emphasised that consumers' experience of CROMs involved significant intrapersonal processes that could be positive or negative, and that this active use of CROMs is poorly understood. These arguments remain current and were clearly reflected in the study participants' comments emphasising the importance of implementing CROMs as both process and outcome tools to aid the more meaningful connection between consumer and clinician, and as part of consumers' self-care.
Lucock et al.'s (2015) UK investigation of barriers and facilitators of effective implementation of an outcome monitoring and feedback system for clinicians in a psychological therapy service found low levels of therapist actions resulting from the feedback, including discussing the feedback in supervision and with consumers. They concluded that the barriers were largely administrative and recommended that ‘Systems involving inputting by [consumers] on hand-held devices or online and linked to therapists' computers provide efficient and prompt feedback to therapists and this is clearly the way forward’ (p. 633).
Our findings suggest that administrative barriers are only one part of the issue in understanding clinician behaviour towards the use of CROMs. Some clues are suggested by Oster et al. (2023) who noted that clinicians may find routine outcome measures intrusive to their practice because they threaten their autonomy and clinical judgement. They also argued that a focus on routine outcome measures as key performance indicators, ‘has the potential to shift the focus of care from the client to the measure, such that health professionals are accountable to the measure rather than the care they provide’ (p. 31—see also Trauer, Callaly, and Herrman 2009). Solstad, Castonguay, and Moltu (2019) and Solstad et al. (2021) also raised this concern, but from the consumer perspective, noting consumers' concern that use of routine outcome measures can become a mere bureaucratic exercise that is tokenistic and undermines the establishment of a therapeutic relationship with clinicians. Låver et al. (2024) further noted negative impacts for some consumers who reported that CROMs disturbed the therapeutic work and alliance, and the importance of listening to consumers who express negative experiences.
To understand how feedback on the use of consumer-rated measures can inform training for mental health staff, to promote their wider use within mental health services: Study participants made some suggestions to improve the data collection process. For instance, many suggested that explaining the purpose of data collection enhanced trust and safety within their encounters with the health professional. This is particularly important for demonstrating a trauma-informed approach to care, which is foundational to recovery-oriented practice (Commonwealth of Australia 2013). It helped consumers to feel safe to disclose, and made them more likely to provide honest answers and to engage with the process. Consumers also requested a multimodal data collection process (e.g., options to fill out measures alone or with clinicians, during a session or at home, at the GP's office or at the mental health clinic). Solstad, Castonguay, and Moltu (2019) reported similar findings, noting that consumers wanted flexibility and engaged better when clinicians explained the reason for data collection.
While the current study's consumers did not express any major concerns about the number of measures or the length of the questionnaires, a recent review that synthesised 103 papers showed numerous different measures in use globally, and discrepancies between and within countries (Roe, Mazor, and Gelkopf 2022). The authors suggested making the tools more consistent worldwide so that implementation is more regular and sustainable. We expect that this would reduce both consumer and clinician burden (in terms of time and administrative fatigue) and allow clinicians to focus on quality care, as the current study's consumers also desired. We would support this call for national consistency through a single consumer self-rated measure for adults and older persons, just as Australia already has a nationally consistent measure for children and adolescents.
The NOCC measures do not collect extensive data on the social determinants of mental health (Gelkopf, Mazor, and Roe 2022; Oster et al. 2023) or on issues such as poverty (e.g., food insecurity, housing instability), despite the collection of such data being strongly advocated by many researchers (O'Brien 2019; Oster et al. 2023; Solstad, Castonguay, and Moltu 2019). Solstad, Castonguay, and Moltu (2019) found that consumers wanted to be involved in defining their own outcomes. This is a missed opportunity to gather insight into consumer priorities and the holistic, psychosocial concerns that may be impacting on their lives. Further enhancements to the NOCC are therefore required.
The Australian National Standards for Safety and Quality in Health Care call for consumers and carers to be actively involved in designing, developing, implementing and evaluating process and outcome monitoring systems in mental health services. This is important to avoid what Greenhalgh et al. (2018) describe as tunnel vision, where ‘negative outliers’ are prioritised; they propose an all-inclusive policy so that all consumers feel able and supported to fill out the measures. Solstad, Castonguay, and Moltu (2019) also noted some consumers' distrust of bureaucracy and paperwork, and their suspicion towards service providers' motives for using consumer-rated and clinician-rated measures. We argue that the future development of the NOCC can only occur with active consumer and carer involvement and trust.
Clinicians should be encouraged to view the NOCC CROMs as an opportunity to engage actively with consumers in dialogue about the questions they contain, their impact on consumers, and how consumers respond, especially given that such measures are often framed within a deficits model of mental health (Oster et al. 2023; Cresswell et al. 2018). A significant effect of using only the NOCC clinician-rated measures is that it positions consumers as passive recipients of mental health care through which these deficits are measured and ‘fixed’ (Oster et al. 2023, 5) and sustains stigmatised attitudes towards them (Walsh and Foster 2021). Without training to help clinicians recognise the value of NOCC CROMs as process tools for shared clinician/consumer dialogue, clinicians are likely, inadvertently or otherwise, to reinforce the power structures that consumers attest limit their engagement in mental health care, and which can undermine their human rights (UNCRPD 2006).
This research has shown that there are a range of conditions and contexts that may enhance or undermine consumers' positive experiences of routine outcome measurement. The results identify potential improvements for the use of the NOCC CROMs in mental health services. This has resulted in the following six recommendations (Table 3).
| | Recommendation | Rationale |
|---|---|---|
| One | Clinicians should actively seek the opportunity to complete and engage in a conversation with consumers around the consumer-rated measures and their implications | Consumers in this study indicated that, by completing the consumer-rated measures, they felt that they were actively engaged, and it gave them a voice in the assessment process |
| Two | Clinicians should explain the purpose of the measures and identify the consumer's preferred method of completion of the consumer-rated measures, as one size does not fit all | Given their positive experiences of completing consumer-rated measures, consumers in this study indicated that the consumer-rated measures should be offered to all consumers. They reflected on a variety of experiences when completing the consumer-rated measures and indicated different preferences for when and how they might complete the measures. Some expressed a desire to complete a measure privately, while others saw advantages to its completion with the clinician, and consumers spoke about how this impacted their sense of trust and empowerment. Hence, it is important to ask each consumer what their preference is |
| Three | Clinicians should approach offering the consumer-rated measures in a way that demonstrates hope and the opportunity for recovery | Consumers in this study identified that the way the clinician offers the consumer-rated measure is an opportunity to show transparency, build rapport and trust, and engender a sense of hope and a possibility for change in the future. Understanding that the measures will be repeatedly offered reinforces to consumers that the clinician is also striving with them, believes in them, is prepared to reflect on their practice and change their approach, if needed. It also reinforces the benefits of measuring change and setting goals in their recovery-oriented journey |
| Four | Clinicians and consumers should discuss the consumer's ratings of the measures every time they are completed | Clinicians who bring an interest and curiosity by listening to the consumer about the consumer-rated measure support the development of rapport. Consumers in this study found the completion of the consumer-rated measures, and the discussion with clinicians, provided an opportunity for a structured discussion that also provided the foundation for a much broader and richer discussion. They also indicated that clinicians should explore the reasons for changes in ratings |
| Five | Clinicians should adopt an approach to the use of the measures that encourages the consumer to reflect on their current situation and how it has changed over time | Consumers in this study identified that the consumer-rated measures can be confronting but gave them an opportunity to reflect and gain perspective on their current situation, and to be honest with themselves and others. Consumers indicated that the measures provide an opportunity for both the clinician and the consumer to track progress over time |
| Six | Consumers should be encouraged to explore the use of the consumer-rated measures for their own self-management and empowerment | The study found that consumers see the completion of the consumer-rated measures as an opportunity for personal reflection, providing an ability to celebrate the good times and gain pride in their achievements, or to realise they are stuck or that things are not going well. By encouraging the consumer to ‘hold the pen’ while completing their consumer-rated measures, the consumer takes a small step in empowering themselves in their recovery journey with the clinician |
4.1 Limitations
Daya, Hamilton, and Roper (2020) remind us that treatment and care experiences can be diverse. Hence, even though we sought participants with predominantly positive experiences, we are mindful of the small sample, the purposive nature of recruitment and this diversity. There are limitations to the conclusions drawn, as they reflect only a subgroup of consumers and do not represent the wider perspectives of all consumers. Future research to understand wider perspectives and inform the appropriate use of CROMs should seek more diverse views, including subgroups representing diversity in age, gender, culture, socio-economic status, location, mental health diagnosis and so forth. For example, young people may have different views and needs than older adults, as suggested by Tollefsen, Neumer, and Berg-Nielsen (2020), who highlighted young people's focus on control over their life circumstances. A further limitation is the scope and breadth of the interview guide questions; for example, we did not explore in depth implementation issues or consumers' perceptions of how clinicians and services used their feedback to improve practice, and we did not include the views of clinicians and other potential stakeholders (Gelkopf, Mazor, and Roe 2022). These might include service managers, family/kin and lived experience peer workers. Finally, we acknowledge the potential power differentials between consumers and clinicians in the combined focus group for interview guide development.
5 Conclusions
This research provides a detailed description of what a small sample of mental health consumers have said would improve how clinicians offer and deliver CROMs within their practice. The results suggest a number of directions for how clinicians can use the measures in a positive and productive way when working with people with mental health concerns.
6 Relevance for Clinical Practice
Little is known about consumers' experiences of CROMs, and even less is known about consumers' positive experiences of their use in their encounters with clinicians. This research provides valuable insights into why collaboration and partnership in the use of these measures is so important for building trust and engagement with consumers and promoting consumer agency and self-management as part of recovery-oriented outcomes. A better understanding of the consumers' views could increase clinicians' active use of the measures as a valuable source of engaging more effectively with consumers.
Author Contributions
All authors listed meet the authorship criteria according to the latest guidelines of the International Committee of Medical Journal Editors, and all authors are in agreement with the manuscript. Together, the authors conceptualised the study and determined the design and methods. D.J. collected the data, and all authors contributed to data analysis with S.L. leading theme development with robust contributions from all authors. S.L. led the drafting of the manuscript with all authors contributing to its finalisation.
Acknowledgements
We wish to thank the participants who undertook interviews for this research, as well as the Lived Experience focus group participants who helped us to design the interview questions. We also wish to thank AMHOCN for their support of this project, and Dr Sarah Zabeen, who reviewed a draft of this paper. Open access publishing facilitated by Flinders University, as part of the Wiley - Flinders University agreement via the Council of Australian University Librarians.
Ethics Statement
This project was approved by the Flinders University Human Research Ethics Committee (No. 5007).
Conflicts of Interest
The authors declare no conflicts of interest.
Open Research
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.