Active Support Measure: a multilevel exploratory factor analysis
Abstract
Background
Active Support is a person-centred practice that enables people with intellectual disabilities (IDs) to engage in meaningful activities and social interactions. The Active Support Measure (ASM) is an observational tool designed to measure the quality of support that people with IDs living in supported accommodation services receive from staff. The aim of the study was to explore the underlying constructs of the ASM.
Methods
Multilevel exploratory factor analysis was conducted on ASM data (n = 884 people with IDs across 236 accommodation services) collected during a longitudinal study of Active Support in Australian accommodation services.
Results
Multilevel exploratory factor analysis indicated that 12 of the ASM's 15 items loaded on two factors, named Supporting Engagement in Activities and Interacting with the Person.
Conclusions
The 12-item ASM measures two dimensions of the quality of staff support. Good Active Support comprises both technical and interpersonal skills.
Introduction
Active Support has been one of the most frequently researched staff practices in supported accommodation services for adults with intellectual disabilities (IDs). It is a way of supporting people with IDs to engage in meaningful activities and social interactions (Mansell & Beadle-Brown 2012). Active Support is a person-centred practice that has been informed by behaviour theory (Toogood et al. 2016). Staff use the everyday opportunities available for a person to be engaged (e.g. domestic, leisure and social activities) and support their participation through, for example, task analysis, providing the right type and amount of assistance, and offering choice about what they do and when, while tailoring communication to the person and keeping interactions warm (Bigby & Humphreys 2023).
Research into Active Support began in the UK in the 1980s and has since been conducted in other countries, such as Australia, New Zealand, Taiwan and the USA (Flynn et al. 2018). In a systematic review and meta-analysis of 14 studies addressing the effectiveness of Active Support, Flynn et al. (2018) found that, following its implementation, there were significant increases in the amount of time people with IDs were engaged in meaningful activities and social interactions, the amount of time they received assistance from staff and the quality of staff support. More recent research has focused on the implementation of Active Support in disability support organisations, identifying barriers such as staff turnover (Qian et al. 2019) and predictors of effective Active Support, such as strong practice leadership, staff training and support from organisational senior leaders (Bigby et al. 2020a,b). In recognition of this evidence, in Australia, the Royal Commission into Violence, Abuse, Neglect and Exploitation of People with Disability (2023) recommended in its final report that disability organisations implement Active Support and that staff be trained in it.
The Active Support Measure (ASM; Mansell & Elliot 1996; Mansell et al. 2005), which has been used across effectiveness studies, was designed to evaluate the quality of staff support received by a person with an ID. It is completed following structured observations of staff and residents in accommodation services, coinciding with the use of another observational tool - the Engagement in Meaningful Activity and Relationships (EMAC-R; Mansell & Beadle-Brown 2005) measure of resident engagement and staff assistance. The ASM comprises 15 items (Table 1) rated on a scale of 0–3 to yield a maximum score of 45, which is converted to a percentage. People supported by staff providing quality support (high ASM scores) have higher levels of engagement, as shown in studies employing correlation (Beadle-Brown et al. 2016), multiple regression (Mansell et al. 2008) and multilevel modelling (Humphreys et al. 2020).
Table 1 Active Support Measure items

Item |
---|
1. Age appropriateness of activities and materials |
2. ‘Real’ rather than pretend or very simple activities |
3. Choice of activities |
4. Demands presented carefully |
5. Tasks appropriately analysed to facilitate service user involvement |
6. Sufficient staff contact for service user |
7. Graded assistance to ensure service user success |
8. Speech matches developmental level of service user |
9. Interpersonal warmth |
10. Differential reinforcement of adaptive behaviour |
11. Staff notice and respond to service user behaviour |
12. Staff manage serious challenging behaviour well |
13. Staff work as a coordinated team to support service user |
14. Teaching is embedded in everyday activities |
15. Written plans in routine use |
Although the ASM has been shown to be a reliable predictor of engagement, two issues have been raised. First, the 15 items form a single scale, suggesting that they tap a single dimension of support (DeVellis 2012); however, this assumption has not been examined. Second, all 15 items are rarely observed. For example, Mansell et al. (2003) reported that inter-rater reliability was not calculated for the item ‘Staff manage serious challenging behaviour well’ because “too little challenging behaviour was observed” (p. 174). Results by Beadle-Brown et al. (2011, as cited in Mansell & Beadle-Brown 2012) indicated that the items ‘Staff manage serious challenging behaviour well’ and ‘Written plans in routine use’ were observed infrequently or not at all. Bigby et al. (2020b) noted that two items about staff support for challenging behaviour (i.e. ‘Differential reinforcement of adaptive behaviour’ and ‘Staff manage serious challenging behaviour well’) were excluded from the calculation of ASM scores when this behaviour was not observed.
Given the reliance on this tool in research, the aim of this study was to explore the underlying constructs of the ASM. Using data from previous research, we conducted a multilevel exploratory factor analysis (MEFA) to identify these constructs.
Methods
Design
A secondary analysis was conducted on data collected from a longitudinal study of Active Support in Australian supported accommodation services (Mansell et al. 2013; Bigby et al. 2019, 2020b). The design of the present study was cross-sectional and exploratory. Ethics approval was granted by the La Trobe University Human Research Ethics Committee.
Participants
Data were drawn from the longitudinal study, collected from 2009 to 2019. Sixteen disability organisations participated: four took part in every year of data collection, and 12 entered or exited at different points across the study. People with IDs were eligible for participation if they lived in an accommodation service managed by a participating organisation and were aged 18 years or older. Written consent was provided either by the participant or, if they were unable to provide it themselves, by a family member or guardian.
Measures
The ASM measures the quality of staff support received by a person with an ID. Researchers completed the ASM for each person with an ID observed by assigning ratings for the 15 items (Table 1) on a 4-point scale anchored by 0 (the support was not provided/lowest level of performance) and 3 (good support/performance). Ratings were based on 2 h of structured observations in services, during which the EMAC-R was completed (Mansell & Beadle-Brown 2005).
Active Support Measure scores are calculated as a percentage of the scale's maximum: the ratings for the 15 items are summed (a maximum possible score of 45) and converted to a percentage. Adjustments are made to the scale's maximum when items 10, 12 or 13 are unrated because, for example, the person did not display maladaptive behaviour, serious challenging behaviour was not observed or only one staff member was observed on shift, respectively. In these cases, the ratings are treated as missing, and both the total and the maximum are reduced accordingly for that person.
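To make the scoring rule concrete, the following is a minimal sketch in Python, assuming item ratings are held in a simple list; the function name, the example ratings and the use of None for an unrated item are illustrative and not part of the published measure.

```python
from typing import Optional, Sequence

def asm_percentage(ratings: Sequence[Optional[int]], max_rating: int = 3) -> Optional[float]:
    """Convert ASM item ratings (0-3, or None for an unrated item) into a
    percentage of the maximum possible score, dropping unrated items from
    both the total and the scale maximum."""
    rated = [r for r in ratings if r is not None]
    if not rated:
        return None  # no items could be rated for this person
    return 100 * sum(rated) / (max_rating * len(rated))

# Example: item 12 unrated because no serious challenging behaviour was
# observed, so the maximum becomes 14 items x 3 = 42 rather than 45.
ratings = [3, 2, 2, 1, 2, 3, 1, 3, 3, 2, 2, None, 1, 0, 0]
print(round(asm_percentage(ratings), 1))  # 59.5
```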
Information about participants' adaptive and challenging behaviours was collected using the Short Adaptive Behavior Scale (SABS; Hatton et al. 2001) and the Aberrant Behavior Checklist (ABC; Aman et al. 1995).
Procedures
For each observation, a researcher visited the accommodation service, usually from 16:00 to 18:00 h, and, using the EMAC-R and momentary time sampling, collected data on the amount of time each person with an ID was engaged in meaningful activities and social interactions and received assistance from staff (Bigby et al. 2019, 2020b). Based on the support received during the 2 h, the researcher completed the ASM for each person with an ID observed. On average, four to five people with IDs and two staff were observed per visit.
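As an illustration of how momentary time-sampling records translate into an amount of time, the sketch below computes the percentage of observed intervals in which a behaviour was recorded; the interval count, variable names and example data are hypothetical and are not taken from the EMAC-R protocol.

```python
def percent_of_intervals(samples: list) -> float:
    """Percentage of momentary time-sampling intervals in which the target
    behaviour (e.g. engagement in a meaningful activity) was recorded."""
    if not samples:
        raise ValueError("no intervals recorded")
    return 100 * sum(bool(s) for s in samples) / len(samples)

# Hypothetical example: 24 momentary samples across a 2-hour observation.
engaged = [True, False, True, True, False, True] * 4
print(round(percent_of_intervals(engaged), 1))  # 66.7
```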
Paper questionnaires comprising demographic questions (i.e. age, gender and ethnicity), the SABS and the ABC were posted to participating organisations for distribution in the accommodation services. Staff members who worked in the services and knew the participants well completed them. Questionnaires were returned in prepaid envelopes or collected by researchers.
Analyses
Data from eight distinct timepoints of the longitudinal study were combined in SPSS 25. Following initial cleaning, the database comprised 1713 observations of people with IDs. For individuals appearing in the database more than once, one observation was randomly selected to ensure data points were independent. The final dataset comprised data from 884 individuals with IDs living in 236 accommodation services.
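A minimal sketch of this data-preparation step in Python/pandas, assuming the combined file holds one row per observation; the file name, column names and random seed are hypothetical (the original preparation was done in SPSS).

```python
import pandas as pd

# One row per observation; people observed at several timepoints appear
# in multiple rows identified by a person-level ID.
df = pd.read_csv("asm_longitudinal.csv")

# Keep a single, randomly chosen observation per person so that the
# retained rows are independent (one row per individual).
one_per_person = (
    df.groupby("person_id", group_keys=False)
      .sample(n=1, random_state=2020)
)
print(len(one_per_person))  # number of unique individuals retained
```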
Multilevel exploratory factor analysis was conducted in Mplus 8.4 to explore the underlying structure of the ASM at the individual level (i.e. level 1, or within-cluster level) and the accommodation service level (i.e. level 2, or between-cluster level). MEFA accounts for individuals with IDs being nested in accommodation services and for the dependency in ASM scores that arises from residents receiving support from the same staff.
In MEFA, the number of factors and the items on which they load may be the same or different at level 1 and level 2. When conducting MEFA and interpreting results, whether the level 1 or level 2 solutions have equal or different importance is determined by the primary unit of interest and the intended use of scores (Stapleton et al. 2016). Regarding the ASM, both individual-level (level 1) and service-level (level 2) scores are meaningful; however, individual-level ASM scores are arguably of greater interest because the ASM measures the quality of support received by an individual, which may be similar or different across people living in the same home, and individual ASM scores have been used in previous research as independent (Humphreys et al. 2020) and dependent variables (Bigby et al. 2020b). Service-level scores represent the quality of staff support received by all the residents in a home (i.e. ASM scores averaged across residents) and could be used to compare scores across services within organisations, such as high-performing and low-performing services. There was an insufficient number of organisations (n = 16) to conduct an MEFA at the organisational level.
Given that the ASM uses a 4-point response scale, the estimation method recommended for ordinal data, weighted least squares with mean and variance adjusted, was used (Muthén & Muthén 1998–2017). The number of factors to extract was examined using the eigenvalues-greater-than-one rule, the scree plot and comparison of model fit statistics. Geomin oblique rotation was applied to the extracted factors. Multiple MEFA models were requested (one to three factors at level 1 and level 2), and the results were compared. The model fit indices examined, with their cut-off values indicating good fit, were the root mean square error of approximation (RMSEA) ≤0.06, comparative fit index (CFI) ≥0.95, Tucker–Lewis index (TLI) ≥0.95 and standardised root mean square residual (SRMR) ≤0.08 (Hu & Bentler 1999). Items were retained if they had a coefficient of ≥0.40 on the pattern matrix, and factors were retained if they comprised a minimum of three items. Missing data were handled using pairwise present analysis in Mplus (Muthén & Muthén 1998–2017).
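As a small illustration of the eigenvalues-greater-than-one rule, the sketch below applies it to an invented correlation matrix; in the study the rule was applied to the level-specific correlation matrices estimated by Mplus, alongside the scree plot and fit indices.

```python
import numpy as np

# Invented 4 x 4 correlation matrix with two correlated pairs of items.
R = np.array([
    [1.0, 0.7, 0.2, 0.2],
    [0.7, 1.0, 0.2, 0.2],
    [0.2, 0.2, 1.0, 0.7],
    [0.2, 0.2, 0.7, 1.0],
])

eigenvalues = np.linalg.eigvalsh(R)[::-1]   # sorted largest first
print(eigenvalues)                          # [2.1 1.3 0.3 0.3]
n_factors = int((eigenvalues > 1).sum())    # eigenvalues-greater-than-one rule
print(n_factors)                            # suggests extracting 2 factors
```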
Results
Descriptive data
Participants with IDs were, on average, aged 46 years (SD = 13), and 55% were male. The mean SABS score was 148.49 (SD = 61.71, range = 21.98–291.02), indicating that, on average, participants had severe IDs, but there was variability across participants, ranging from mild to profound IDs. The mean ABC score was 26.78 (SD = 23.86, range = 0–124).
Table 2 presents for each ASM item the number of cases, mean, standard deviation, skewness and kurtosis. High levels of skewness and kurtosis were found for items 12 and 15: ratings of ‘0’ (not observed) were high for both (Appendix 1). Missing data were found for items 10 (86.1%), 11 (0.1%), 12 (7.5%) and 13 (16.0%), resulting in removal of item 10. The mean ASM score was 55.31% (SD = 23.93, range = 0–100). Also presented in Table 2 are intraclass correlation (ICC) values for the ASM items. ICC values for the items were high, ranging from 0.34 to 0.83, with all values >0.05, indicating clustering of ASM scores within accommodation services and providing statistical justification for MEFA.
Table 2 Descriptive statistics and intraclass correlations for the Active Support Measure items

Item | N | Mean | SD | Skewness | Kurtosis | ICC |
---|---|---|---|---|---|---|
1. Age appropriateness of activities and materials | 884 | 2.15 | 1.23 | −0.99 | −0.78 | 0.54 |
2. ‘Real’ rather than pretend or very simple activities | 884 | 1.72 | 1.06 | −0.57 | −0.92 | 0.47 |
3. Choice of activities | 884 | 1.60 | 1.14 | −0.16 | −1.39 | 0.61 |
4. Demands presented carefully | 884 | 1.62 | 1.15 | −0.20 | −1.40 | 0.54 |
5. Tasks appropriately analysed to facilitate service user involvement | 884 | 1.43 | 1.07 | 0.03 | −1.24 | 0.56 |
6. Sufficient staff contact for service user | 884 | 1.78 | 1.03 | −0.26 | −1.14 | 0.49 |
7. Graded assistance to ensure service user success | 884 | 1.47 | 1.13 | 0.06 | −1.39 | 0.53 |
8. Speech matches developmental level of service user | 884 | 2.55 | 0.70 | −1.46 | 1.44 | 0.51 |
9. Interpersonal warmth | 884 | 2.54 | 0.65 | −1.33 | 1.46 | 0.57 |
10. Differential reinforcement of adaptive behaviour | 123 | 2.11 | 0.90 | −0.85 | 0.02 | 0.34 |
11. Staff notice and respond to service user behaviour | 883 | 2.33 | 0.81 | −1.04 | 0.40 | 0.46 |
12. Staff manage serious challenging behaviour well | 818 | 0.06 | 0.38 | 6.50 | 43.06 | 0.47 |
13. Staff work as a coordinated team to support service user | 743 | 1.62 | 0.90 | −0.26 | −0.66 | 0.80 |
14. Teaching is embedded in everyday activities | 884 | 0.49 | 0.95 | 1.72 | 1.46 | 0.65 |
15. Written plans in routine use | 884 | 0.13 | 0.49 | 3.69 | 12.71 | 0.83 |
- SD, standard deviation; ICC, intraclass correlation.
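The ICC for each item indexes the share of its variance that lies between accommodation services. The sketch below shows one common way to estimate an ICC for a single item by fitting an intercept-only multilevel model; the file and column names are hypothetical, the ratings are treated as continuous, and the values in Table 2 come from the two-level model in Mplus, which handles the items as ordinal, so results would differ somewhat.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per person, the rating on one ASM item ('item1')
# and an identifier for the accommodation service the person lives in.
df = pd.read_csv("asm_item_ratings.csv")

# Intercept-only (null) model with a random intercept for service.
model = smf.mixedlm("item1 ~ 1", data=df, groups=df["service_id"]).fit()

between_var = model.cov_re.iloc[0, 0]   # variance between services
within_var = model.scale                # residual (within-service) variance
icc = between_var / (between_var + within_var)
print(round(icc, 2))
```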
Multilevel exploratory factor analysis
The initial inclusion of 14 of the 15 ASM items (item 10 removed) in the MEFA resulted in problems with convergence, which were addressed through the removal of items 12 (‘Staff manage serious challenging behaviour well’) and 15 (‘Written plans in routine use’). These two items had been identified as problematic in preliminary analyses (e.g. inspection of correlation matrices and single-level exploratory factor analysis), and when corrections were made to enable convergence in the MEFA using randomly generated sets of starting values (Muthén & Muthén 1998–2017), both items had weak pattern coefficients (<0.30) at level 1.
For the remaining 12 items, examination of the eigenvalues and scree plot indicated that two factors should be extracted at level 1, and the eigenvalues likewise indicated two factors at level 2. Examination of fit statistics for models with differing numbers of factors confirmed that two factors provided a good fit: RMSEA = 0.05, CFI = 0.99, TLI = 0.98, SRMR level 1 = 0.06 and SRMR level 2 = 0.03. Furthermore, both factors comprised more than three items with pattern coefficients >0.40.
Table 3 presents the pattern coefficients for the 12 retained items at level 1 and level 2. Pattern coefficients represent the unique relationship between the item and the factor, controlling for the effects of the other factor, where 0 means no relationship and values further from zero in either direction (e.g. ±1.00) indicate a stronger relationship (Fabrigar & Wegener 2012). At level 1, the largest pattern coefficients for each item ranged from 0.469 to 0.979. One item – 7 ‘Graded assistance to ensure service user success’ – loaded (>0.40) on both factors, though it loaded higher on factor 1. The first factor – named Supporting Engagement in Activities – comprised seven items. The second factor – named Interacting with the Person – comprised five items. The correlation between the two factors was 0.62, indicating that they measured related but distinct dimensions of Active Support.
Table 3 Pattern coefficients for the 12 retained Active Support Measure items at level 1 and level 2 of the multilevel exploratory factor analysis

Item | Level 1: Factor 1 | Level 1: Factor 2 | Level 2: Factor 1 | Level 2: Factor 2 |
---|---|---|---|---|
1. Age appropriateness of activities and materials | 0.977 | −0.003 | 1.089 | −0.153 |
2. ‘Real’ rather than pretend or very simple activities | 0.979 | −0.052 | 0.907 | 0.003 |
3. Choice of activities | 0.797 | 0.192 | 0.773 | 0.253 |
4. Demands presented carefully | 0.642 | 0.367 | 0.675 | 0.375 |
5. Tasks appropriately analysed to facilitate service user involvement | 0.696 | 0.323 | 0.771 | 0.251 |
6. Sufficient staff contact for service user | 0.338 | 0.524 | 0.022 | 0.897 |
7. Graded assistance to ensure service user success | 0.503 | 0.421 | 0.465 | 0.595 |
8. Speech matches developmental level of service user | 0.001 | 0.726 | 0.049 | 0.827 |
9. Interpersonal warmth | −0.209 | 0.854 | −0.042 | 0.733 |
11. Staff notice and respond to service user behaviour | 0.049 | 0.759 | −0.034 | 0.980 |
13. Staff work as a coordinated team to support service user | 0.245 | 0.469 | 0.305 | 0.590 |
14. Teaching is embedded in everyday activities | 0.472 | 0.327 | 0.333 | 0.475 |
- Item numbers are from the Active Support Measure. Coefficients highlighted in bold indicate the factor on which the item was placed. Underlined coefficient indicates cross-loading.
At level 2 of the MEFA, 10 items loaded highest on the same factors as in the level 1 solution; however, two items – 7 and 14 – loaded on a different factor, factor 2 (Table 3). As in the level 1 solution, item 7 cross-loaded (pattern coefficients >0.40 on both factors). The correlation between the two factors at level 2 was 0.73. The structure coefficients for the level 1 and level 2 MEFA solutions are provided in Appendix 2.
Discussion
Secondary analysis of a large dataset enabled exploration of key properties of the ASM in light of its frequent use in research into the quality of staff support in accommodation services. A key finding was that the ASM taps two constructs of Active Support, which revises the assumption of a single construct implicit in the way the measure has been used and reported in previous research (e.g. Mansell et al. 2008; Bigby et al. 2020b). Further, three of the 15 items did not contribute to these constructs, calling into question their relevance to the measurement of the quality of Active Support.
The MEFA was conducted at two levels, but the individual level was of primary interest for interpretation because the ASM measures the quality of support received by an individual, and in previous research, individual ASM scores have been used as predictor (Humphreys et al. 2020) and outcome variables (Bigby et al. 2020b). The seven items (1–5, 7 and 14) that loaded onto the first factor – Supporting Engagement in Activities – related directly to the way staff support a person to engage in activities: offering real and age-appropriate activities, providing choice of activities, preparing the activities and communicating what is involved, using task analysis and graded assistance to enable engagement, and using opportunities to teach new skills. The five items that loaded onto the second factor (6, 8, 9, 11 and 13) – Interacting with the Person – related directly to the way staff interact with the person: communicating in ways the person understands, noticing and responding to the person's communication (verbal and nonverbal), being warm and respectful, having sufficient contact with them and working as a team to provide support. Conceptually, these two factors tap the technical and interpersonal aspects of good support identified by Mansell and Beadle-Brown (2012), and the large correlation between them supports their argument about their interdependence: that is, without good interpersonal relations, the technical aspects of support are less likely to be delivered effectively. Item 7 ‘Graded assistance to ensure service user success’, which is an essential element of Active Support (Mansell & Beadle-Brown 2012), loaded on both factors, indicating that it is a component of both aspects of support.
The finding that the ASM's factor structure was similar at the individual and service levels provided further support that the ASM measures two constructs. Comparisons between the individual-level and service-level solutions showed that most items loaded on the same factors; however, because two items loaded on different factors, subscale scores need to be calculated differently depending on the unit of analysis. In analyses where individual ASM scores are of primary interest, items 7 (‘Graded assistance to ensure service user success’) and 14 (‘Teaching is embedded in everyday activities’) belong to factor 1. However, in analyses where the service level is the unit of interest, such as examining or comparing the average level of support received by all the residents in a service, items 7 and 14 belong to factor 2 when calculating service-level ASM scores (i.e. aggregating to the service level). These differences between the level 1 and level 2 solutions are likely to reflect differences in the constructs at the individual and service levels.
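A minimal sketch of how the two subscale scores could be computed at each level, assuming a wide-format file with hypothetical column names (item1–item15, service_id); simple sums are shown, and in practice the missing-item adjustment described in the Measures section would also apply.

```python
import pandas as pd

df = pd.read_csv("asm_ratings.csv")  # one row per person, item ratings 0-3

# Individual-level (level 1) assignment of items to subscales, per Table 3.
ENGAGEMENT_L1 = ["item1", "item2", "item3", "item4", "item5", "item7", "item14"]
INTERACTION_L1 = ["item6", "item8", "item9", "item11", "item13"]

df["supporting_engagement"] = df[ENGAGEMENT_L1].sum(axis=1)
df["interacting_with_person"] = df[INTERACTION_L1].sum(axis=1)

# Service-level (level 2) assignment: items 7 and 14 move to the second
# factor before aggregating individual scores to service means.
ENGAGEMENT_L2 = ["item1", "item2", "item3", "item4", "item5"]
INTERACTION_L2 = ["item6", "item7", "item8", "item9", "item11", "item13", "item14"]

service_scores = (
    df.assign(
        supporting_engagement=df[ENGAGEMENT_L2].sum(axis=1),
        interacting_with_person=df[INTERACTION_L2].sum(axis=1),
    )
    .groupby("service_id")[["supporting_engagement", "interacting_with_person"]]
    .mean()
)
```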
The value of items not retained in the MEFA (items 10, 12 and 15) remains unknown in the absence of observing these supports. In the case of challenging behaviour (relevant to items 10 and 12), the fact that it was rarely seen during the observations reflects not only previous observational research using the ASM (Mansell et al. 2003) but also other observational research focused on challenging behaviour (Nankervis et al. 2020). Although Active Support has been proposed to complement Positive Behaviour Support and benefit people who display challenging behaviour (Beadle-Brown & Hutchinson 2016), the infrequent occurrence of challenging behaviour in direct observation research (apart from self-stimulatory behaviour; Totsika et al. 2010; Beadle-Brown et al. 2012) raises questions about the suitability of the ASM items that rely on witnessing it (i.e. items 10 and 12). Similarly, regarding item 15 ‘Written plans in routine use’, both the current and previous research (Beadle-Brown et al. 2011, cited in Mansell & Beadle-Brown 2012) indicated that such plans are rarely referred to by staff during observations. Furthermore, Active Support training developed by Mansell and colleagues following the ASM's design placed less emphasis on written plans (Mansell & Beadle-Brown 2012; Beadle-Brown & Hutchinson 2016).
Limitations and future research
The extent to which the factor structure would replicate with data from countries other than Australia requires testing, given the potential for cultural and staff training differences. Nonetheless, much of the research using the ASM has been conducted in the UK and Australia and has drawn on similar staff training materials and methods; hence, similarities are likely. Regarding implementation research, Active Support training for staff has typically focused on the technical aspects of support, but there is potential to further develop the interpersonal aspects (Johnson et al. 2017), such as communicating effectively and creating warm and friendly interactions. Further research is needed to test the factor structure through multilevel confirmatory factor analysis. An examination of concurrent validity is also warranted, for example, by examining associations between the two dimensions of Active Support and levels of engagement as measured by the EMAC-R.
Conclusions
The ASM was found to comprise 12 items that measure two dimensions of the quality of staff support: Supporting Engagement in Activities and Interacting with the Person. The results indicate that summing items within each dimension (i.e. treating them as subscales) is more appropriate and informative than summing across all items as an overall index of the quality of staff support. A revised version of the ASM comprising the 12 items in the final factor structure may prove more informative in both practice and research.
Acknowledgement
We acknowledge the contribution of Jane Bowden-Dodd for assisting with data management in the initial stage of the analysis.
Open access publishing facilitated by La Trobe University, as part of the Wiley - La Trobe University agreement via the Council of Australian University Librarians.
Source of funding
No external funding was received for the research reported in the paper.
Conflict of interest
No conflicts of interest have been declared.
Appendix 1: Response scale rating frequencies for Active Support Measure items (n = 884 cases)
Item | 0 | 1 | 2 | 3 | Missing |
---|---|---|---|---|---|
1. Age appropriateness of activities and materials | 189 | 40 | 100 | 555 | |
2. ‘Real’ rather than pretend or very simple activities | 202 | 52 | 422 | 208 | |
3. Choice of activities | 215 | 176 | 239 | 254 | |
4. Demands presented carefully | 218 | 163 | 238 | 265 | |
5. Tasks appropriately analysed to facilitate service user involvement | 222 | 229 | 261 | 172 | |
6. Sufficient staff contact for service user | 117 | 242 | 247 | 278 | |
7. Graded assistance to ensure service user success | 231 | 233 | 194 | 226 | |
8. Speech matches developmental level of service user | 11 | 74 | 216 | 583 | |
9. Interpersonal warmth | 9 | 52 | 275 | 548 | |
10. Differential reinforcement of adaptive behaviour | 9 | 16 | 50 | 48 | 761 |
11. Staff notice and respond to service user behaviour | 30 | 103 | 299 | 451 | 1 |
12. Staff manage serious challenging behaviour well | 792 | 9 | 8 | 9 | 66 |
13. Staff work as a coordinated team to support service user | 95 | 205 | 328 | 115 | 141 |
14. Teaching is embedded in everyday activities | 670 | 63 | 83 | 68 | |
15. Written plans in routine use | 816 | 21 | 43 | 4 |
Appendix 2: Structure coefficients from the multilevel exploratory factor analysis of the Active Support Measure at level 1 and level 2 (n = 884 participants living in 236 accommodation services)
Item | Level 1: Factor 1 | Level 1: Factor 2 | Level 2: Factor 1 | Level 2: Factor 2 |
---|---|---|---|---|
1. Age appropriateness of activities and materials | 0.976 | 0.606 | 0.978 | 0.641 |
2. ‘Real’ rather than pretend or very simple activities | 0.946 | 0.557 | 0.909 | 0.664 |
3. Choice of activities | 0.917 | 0.688 | 0.957 | 0.817 |
4. Demands presented carefully | 0.871 | 0.767 | 0.948 | 0.867 |
5. Tasks appropriately analysed to facilitate service user involvement | 0.898 | 0.757 | 0.954 | 0.813 |
6. Sufficient staff contact for service user | 0.664 | 0.734 | 0.677 | 0.914 |
7. Graded assistance to ensure service user success | 0.766 | 0.735 | 0.899 | 0.934 |
8. Speech matches developmental level of service user | 0.454 | 0.727 | 0.653 | 0.863 |
9. Interpersonal warmth | 0.323 | 0.724 | 0.492 | 0.702 |
11. Staff notice and respond to service user behaviour | 0.522 | 0.790 | 0.680 | 0.955 |
13. Staff work as a coordinated team to support service user | 0.538 | 0.622 | 0.735 | 0.812 |
14. Teaching is embedded in everyday activities | 0.676 | 0.621 | 0.680 | 0.718 |
Item numbers are from the Active Support Measure. Coefficients highlighted in bold indicate the factor on which the item was placed.
Open Research
Data availability statement
Research data are not shared.