What’s wrong with NDIS independent assessments? We asked our friend Muriel Cummins from Occupational Therapy Australia’s NDIS Taskforce to help explain why the selected assessment tools are not fit-for-purpose, and why they should not be used to determine plan funding.
The establishment of the National Disability Insurance Scheme (NDIS) Act 2013 was a big, brave step towards progressive disability policy and support in this country. Disability Discrimination Commissioner Dr Ben Gauntlett recently made the point that the NDIS and its underpinning legislation are comparable to the Sydney Opera House in terms of being an Australian icon. Dr Gauntlett expressed concern that the latest NDIS Independent Assessment pilot – which assessed functional capacity using clinically unproven methods – may place Australia at risk of breaching its obligations under the United Nations Convention on the Rights of Persons with Disabilities.
Occupational Therapy Australia (OTA) is the peak body representing occupational therapists across Australia. Assessment of function is a central focus of the occupational therapy profession. OTA has looked very carefully into the plans for NDIS assessments, and it has strong and valid concerns.
OTA has made submissions to both the NDIA Consultation and the Australian Parliament’s Inquiry into NDIS Independent Assessments. Another OTA representative and I appeared as witnesses at the parliamentary inquiry’s first public hearing. We explained why the NDIA’s process for assessing functional capacity is problematic – and what needs to change to keep it ethical and appropriate.
“It is entirely inappropriate to determine a person’s eligibility for NDIS supports using a set of tools which were neither designed nor validated for this purpose and population.” Read more in OTA’s latest submissions to the NDIA here: https://t.co/L28X2dN7bH
— OTA (@otaust) March 9, 2021
After hearing our concerns, Senator Carol Brown asked us to answer one of the biggest questions everyone is asking right now: “Do you see the tools that have been selected by the NDIA – such as the WHODAS and PEDI-CAT – as being appropriate measurements to inform a funding decision in a NDIS plan?”
Or in plainer language …
“Independent Assessments are made up of a number of assessment tools. They will be used to calculate plan funding. Is this ok?”
The short answer is no. (The long answer is at the bottom of this page).
OTA does not think the Independent Assessment (IA) tools are capable of informing funding decisions – either individually or bundled together.
There are two colossal elephants in the NDIA policy room
Both are based on unproven assumptions. That is, things the NDIA thinks are true, but without any evidence to prove them.
The first assumes that a selection of IA tools can accurately assess functional capacity. OTA does not believe they can.
The second assumes that IA scores can somehow translate into an accurate support-budget. OTA does not believe they can.
Short-cutting assessment of functional capacity cannot result in the creation of equal opportunity for people with disabilities. Functional assessment must be done with integrity of process, and it must honour the rights and lived experience of the person with disability.
OTA believes people with disability deserve better than unproven assumptions directing their NDIS journey. OTA states that:
“people with a disability in Australia have a right to an evidence-based, robust, and safe process for assessment of functional capacity to determine access to the NDIS, and to determine funding for reasonable and necessary supports.”
The proposed Independent Assessment tools are not meant to be used like this
OTA holds a clear position on the inappropriateness of the IA assessment tools, individually or as a bundle, for accurately determining functional capacity. These assessment tools are only accurate when used for their intended purpose, with the disability groups for which they were researched and designed. They lose their accuracy, validity and reliability when used in other ways. Validity asks: can we trust that the tools actually measure functional capacity? Reliability asks: can we trust that the tools measure functional capacity consistently, for all disability types?
For example, the Vineland-3 does not have proven validity for use with people with psychosocial disability. It is entirely possible for a person with a psychosocial disability to have substantially reduced functional capacity for self-care. And that could easily slip through the cracks in the IA and be missed entirely.
The IA will likely be more costly – both in terms of dollar amounts and human suffering – than a more robust alternative that includes assessment of individual support needs.
There is no single tool or bundle of tools proven to accurately assess the functional capacity of people with disabilities. It’s not for lack of trying – Australian researchers and the World Health Organisation have been researching this for years.
The IA does not have the proven ability to measure the true impact of disability or substantially reduced functional capacity. The recently closed pilot included a survey which asked people about their satisfaction and experience of having an IA. It did not measure the IA’s specific ability to measure functional capacity.
The NDIA recently acknowledged, in an update to its submission to the Australian Parliament’s Inquiry into Independent Assessments, that the IA toolkit lacks validity (meaning accuracy of assessment) and reliability (meaning consistent assessment across disability types) in the NDIS context, stating:
“The tools selected have proven reliability and validity in the contexts for which they are designed but this cannot be extended with great confidence to other contexts” (p.16).
Their proposed solution is to roll out the IA regardless, with “progressive evaluation”. No further explanation of “progressive evaluation” is offered: no detail on methodology (how), or on ethical or clinical (responsible and professional) oversight. This demonstrates a dangerous disdain for the integrity of functional assessment and denies the clinical reality that the IA has not been researched or evaluated for fitness for purpose. A plan to conduct a “progressive evaluation” while acknowledging problems with validity (accuracy) and reliability (consistency) also demonstrates a willingness to take short-cuts on best practice, and a willingness to require thousands of people with disabilities to undergo an unproven assessment process.
OTA describes best practice functional capacity assessment as more detailed, in order to build an accurate picture. It should involve a combination of things – self-report tools, observational tools, clinical reasoning and interpretation by skilled therapists, and the inclusion of carer, participant, and existing provider perspectives and cultural considerations. The NDIA regularly seeks these best-practice occupational therapy functional capacity assessments for its legal purposes, such as in preparation for an Administrative Appeals Tribunal (AAT) hearing. These assessments stand in stark contrast to the IA, which relies almost completely on self-report tools. The IA runs a high risk of under- or over-rating functional capacity and is too simplistic.
Calculating the participant budget using the assessment scores
Importantly, we cannot provide an evaluation of the NDIA process for translating IA scores into a participant budget, because the details of how this will happen have not been published by the NDIA. It will likely include weighting or tallying of assessment scores – we can only guess. What is becoming clearer is that the IA is an ‘input’ into what is likely an automated or artificial intelligence-driven system that creates participant budgets based on personas, or stereotypes. The use of automation or artificial intelligence has led the disability sector and media to name the mechanism ‘robo-NDIS’ and ‘robo-planning’. Individual support needs are not considered in this process.
A few weeks into a parliamentary inquiry and still no transparency on how proposed #independentassessments will be translated into NDIS participant budgets. What’s in the ‘black box’?? An unproven weighting of assessments? An algorithm? A bunny rabbit?
— muriel cummins (@muriel_cummins) May 14, 2021
Back to the drawing board
The National Disability Insurance Scheme may be older now, but it is arguably less mature. Occupational therapists join participant and carer groups, advocacy groups, and the disability sector broadly in asking for a return to the drawing board to co-design methods for robust functional capacity assessment and for creating participant budgets. Any such approach must meet both the needs of NDIS participants and the need for the future sustainability of the NDIS. There are many ways to approach this that would be more accurate and inclusive than what is currently planned.
It is disappointing to discover via leaked marketing documents that the NDIA views stakeholders, including allied health peak bodies, as risks to be managed. This approach is the opposite of collaboration and co-design. Building co-designed best-practice approaches can only happen if the NDIA undertakes genuine, inclusive consultation with stakeholders.
We need to return to the basic principles underpinning the NDIS. Without re-commitment to basic principles, we can’t progress. We can’t rebuild multi-partisan support – or make legislative changes that will sustain the NDIS into the future. Without agreed core principles, the NDIS will not reach its potential, and will deny people with disabilities the opportunity to reach theirs.
And it is unthinkable that we may wake one morning in late 2021, to find the Sydney Opera House has been replaced by something dark and dated, with a 1940’s brick veneer.
Muriel Cummins is an occupational therapist and member of Occupational Therapy Australia’s NDIS Taskforce.
Ms Bonnie Allan
Joint Standing Committee on the NDIS
21 May, 2021
Dear Ms Allan
Re: Occupational Therapy Australia response to question taken on notice at public hearing in Melbourne on 23 April 2021
Occupational Therapy Australia (OTA) thanks the Joint Standing Committee on the National Disability Insurance Scheme (NDIS) for the opportunity to appear before the Committee on 23 April 2021, and to respond to the following question from Senator Carol Brown taken on notice at the hearing:
Do you see the tools that have been selected by the NDIA – such as the WHODAS and PEDI-CAT – as being appropriate measurements to inform a funding decision in a NDIS plan?
OTA holds a clear position that people with a disability in Australia have a right to an evidence-based, robust, and safe process for assessment of functional capacity to determine access to the NDIS, and to determine funding for reasonable and necessary supports. For this reason, OTA welcomes Senator Brown’s question regarding the appropriateness of the tools selected by the NDIA to inform NDIS plans and funding decisions.
OTA has considered the relative merits and limitations of the Independent Assessment (IA) measurement tools proposed to inform funding decisions and NDIS plans. This includes close examination of their relevance, utility, and psychometric properties (WHO 2020).
While these tools have reasonably sound intrinsic measurement properties when used for the purpose for which they were specifically designed, their reliability and validity are profoundly compromised when they are used for other purposes. OTA does not believe the NDIA is using the measures for the purpose for which they were intended. These tools were not designed specifically to assess functional capacity to inform funding decisions or plans, and they lack sufficient relevance, sensitivity or specificity to be used in this way.
OTA offers the following specific observations in response to Senator Brown’s question.
Are these tools designed to sufficiently inform funding for NDIS plans?
The IA toolkit is primarily based on the use of global self-report measures related to levels of health and disability in the general population. They have reasonably sound measurement properties when they are used for their intended purpose. However, they are not designed as functional assessment tools and do not provide sufficient detail to effectively determine individual functional capacity or support needs.
A functional assessment identifies what the person can and can’t do, due to disability-related impairment. A support needs assessment identifies what the person needs to address the impairment, so that they can capacity-build, or compensate for the impairment, thereby reducing the impact of the disability on their ability to participate in their lives and community. Delivering standard plans and budgets, sometimes known as ‘roboplans’, based on an IA provides limited detail and effectively bypasses the essential step of individual support needs identification. By neglecting disability support needs, the NDIA runs the real risk of rendering the assessment process more costly in the long term, both in terms of human suffering and dollar amounts. OTA recommends support-needs identification be an essential step in the determination of participant plans and the budgets that support them.
Are the IA standardised assessment tools INDIVIDUALLY appropriate to inform funding decisions for NDIS plans?
The individual assessment tools included within the pilot IA are robust tools when used for their intended purposes. These are summarised in Table 1. None, however, was developed for the purpose of determining NDIS participant plan budgets. Each tool has evidence-based validity when used with the particular cohorts for which it was designed. However, the tools cease to be valid when they are used with other cohorts. Apart from the WHODAS-2, none was developed to be used in a ‘disability neutral’ manner.
The WHODAS-2 is a global measure based on the conceptual framework of the International Classification of Functioning, Disability, and Health (ICF). While the WHODAS-2 can be used across all disability cohorts, it is a functional screen only, not a functional capacity assessment (Üstün et al 2010). For example, it is entirely possible for a person to have substantially reduced capacity for self-care, and for this not to register on the WHODAS-2.
Attempting to use specific IA tools in a disability neutral manner is problematic in practice. For example, the Vineland-2, a measure of adaptive functioning, has proven validity when used with people who live with developmental disability, such as autism, intellectual disability, or attention-deficit hyperactivity disorder. The Vineland-2 does not have proven validity for assessing people with degenerative conditions, or those who live with psychosocial disability where capacity fluctuates over time. The NDIA intends the Vineland-2 to be used with all NDIS applicants and participants, including cohorts the tool has not been validated for. The current IA pilot adopts this practice. The company publishing the Vineland-2 is aware of the limits of its proven validity, and states that the ‘burden of proof’ for appropriate use of the tool sits with the NDIA and NDIA-contracted companies. So we have a situation whereby participants are undergoing a lengthy, deficit-focused assessment that is not a valid tool to use with a large proportion of those participants, and which will result in flawed scores being used to inform funding decisions in NDIS plans.
The Care and Needs Scale (CANS) was specifically developed for use with people living with acquired brain injury, and was not developed to be used uniformly across all disability cohorts. Therefore, it cannot reliably inform funding decisions for all NDIS participants.
The PEDI-CAT is a norm-referenced, deficit-focused self-report measure that parents complete on behalf of their child (Haley et al 2011). The tool measures the extent of a child’s functional delay in relation to normal age-related milestones. The tool lacks sufficient detail to specifically determine the individual functional or support needs of a child and, crucially, it fails to measure their functional potential. For these reasons, this tool has limited usefulness in determining funding for NDIS plans.
Table 1

| Tool | What does it measure? | Compatible with the NDIA-intended ‘disability neutral’ approach (a uniform assessment across all disability types)? Evidence that the tool can reliably assess functional capacity to inform NDIS funding decisions? |
| --- | --- | --- |
| — | Environmental factors – not functional capacity | ❓ Unclear, e.g. not researched or validated with people with psychosocial disability |
| PEDI-CAT | The extent of a child’s functional delay in relation to normal age-related milestones – not functional capacity | ❓ Unclear. Designed for children from 6 months to 7.5 years. Has a greater weighting toward physical disability |
| Vineland | Adaptive behaviour – not functional capacity | ❌ No. Proven valid for use with specific cohorts |
| Care and Needs Scale (CANS) | Care and needs – not functional capacity | ❌ No. Proven valid for use with specific cohorts |
| Lower Extremity Function Scale | Lower extremity function – not functional capacity | ❌ No. Proven valid for use with specific cohorts |
Are the IA standardised assessment tools COLLECTIVELY appropriate to determine participant plan funding?
There are fundamental flaws in using the unrelated IA self-report measures collectively to provide an overarching picture of a participant’s level of disability (Madden et al 2015). This is because these tools have been developed and validated to measure distinctly different constructs in distinctly different ways. While they may loosely fit under a general concept of disability, they do not collectively measure disability or functional capacity as a construct.
They are invalid and meaningless when they are used collectively. The NDIA appears to have chosen this approach to determine functional capacity, and to use this as a basis for disability funding, yet to the best of our knowledge this approach has never been proven effective anywhere in the world. In fact, research indicates the absence of a single assessment tool, or suite of tools, proven to have the capability to do this (Madden et al 2015).
To validate the NDIA’s proposed IA as a basis for participant plan funding, it would need to prove that:
- The suite of assessment tools that constitute an IA can accurately assess functional capacity in a uniform, disability neutral manner;
- The IA can accurately predict participant funding based on the IA score, in the absence of a support needs assessment; and
- Functional capacity can be measured in a ‘disability neutral’ manner across all cohorts.
Best practice functional capacity assessment comprises self-report tools, observational tools, clinical reasoning and interpretation by appropriately skilled clinicians, and the inclusion of carer, participant, and existing provider perspectives and cultural considerations, to triangulate and formulate an accurate assessment of functional capacity. Complete reliance on self-report tools runs a high risk of under- or over-rating functional capacity, and is overly simplistic. Basing participant plan funding on an IA grounded in self-report measures is fraught with inaccuracies and bias. Extensive reliance on telehealth to complete the IA with vulnerable or remote cohorts may further compromise the accuracy of this approach.
How will the scores be determined and how will they be used?
There has been a lack of transparency around how the NDIA intends to use the suite of assessment tools collectively to determine participant plan funding. The agency has not revealed how the scores from individual assessment tools will combine to give an overall IA score, or how a suite of IA scores will generate participant plan funding. OTA cannot provide an evaluation of the NDIA process for translating IA scores into a participant budget, for the simple reason that the details of this process have not been disclosed by the NDIA. The mechanism may include a weighting of certain assessment scores against disability type to define functional capacity, or a formula, or an algorithm – OTA and the sector can only speculate.
OTA would have serious concerns if the NDIA did intend to collate the scores of the IA tools to determine funding decisions for NDIS plans. Generally, detailed factor analysis is required to sum scores in any measurement tool and this is based on the premise that the content and items in the measurement tool are measuring the same construct. It is not possible to sum scores from different measurement tools, as they do not measure the same things in the same way, or to the same extent. If the NDIA intends to sum the scores of IA tools that are distinctly different measures, it is incumbent on the agency to demonstrate that the methods it is using to sum these scores are appropriate and psychometrically sound for the purpose of informing the funding for NDIS plans.
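The distortion that summing unrelated scores introduces can be illustrated in the abstract. The sketch below uses entirely hypothetical tools and score ranges – it does not represent the NDIA’s actual instruments or any disclosed scoring mechanism – but it shows why naively adding raw scores from instruments with different scales implicitly weights each instrument by the width of its scale, before the deeper question of whether they measure the same construct is even reached.

```python
# Illustrative sketch only: the tools and score ranges below are
# hypothetical, not the NDIA's actual instruments or scoring rules.

# Two hypothetical instruments measuring different constructs:
# Tool A is scored 0-48, Tool B is scored 0-240.
tool_a_max, tool_b_max = 48, 240

# A respondent who sits at 50% of the range on each tool:
tool_a_score = 24    # 50% of 0-48
tool_b_score = 120   # 50% of 0-240

# Naive summing: Tool B dominates the total simply because its scale
# is wider, even though both tools report the same relative level.
naive_total = tool_a_score + tool_b_score          # 144; 120 of it (83%) is Tool B

# Rescaling each score to a common 0-1 range removes the scale artefact...
norm_a = tool_a_score / tool_a_max                 # 0.5
norm_b = tool_b_score / tool_b_max                 # 0.5
normalised_total = norm_a + norm_b                 # 1.0

# ...but even a rescaled sum is only meaningful if the tools measure the
# same construct -- which is what factor analysis is used to establish,
# and what has not been demonstrated for the IA toolkit.
print(naive_total, normalised_total)
```

Even this toy example understates the problem: rescaling fixes only the arithmetic artefact, not the conceptual one that the quantities being added are different constructs.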
As the IA tools are intended to inform funding decisions related to NDIS plans, OTA requests that the Joint Standing Committee seek full transparency around this mechanism, and support the establishment of a process of clinical scrutiny and sector consultation.
OTA has reviewed the independent assessment tools selected by the NDIA to determine funding decisions in NDIS plans. Through this process, OTA has come to the clear conclusion that the Independent Assessment (IA) tools, used both individually and collectively, are not appropriate measures to inform funding decisions for NDIS plans.
OTA urges the Committee to seek full transparency from the NDIA around the aim of the IA pilot, and recommends that the IA is not implemented until adequate, independent research proves its validity for the NDIA’s intended purpose. This research is too important to occur behind closed doors, without independent academic and ethical oversight.
OTA thanks members of the Joint Standing Committee for their continued interest in this most important matter.
Chief Executive Officer
Haley, S. M., Coster, W. J., Dumas, H. M., Fragala-Pinkham, M. A., Kramer, J., Ni, P., … & Ludlow, L. H. (2011). Accuracy and precision of the Pediatric Evaluation of Disability Inventory computer-adaptive tests (PEDI-CAT). Developmental Medicine & Child Neurology, 53(12), 1100-1106.
Madden, R., et al. (2015). In search of an integrative measure of functioning. https://pubmed.ncbi.nlm.nih.gov/26016438/. Retrieved 23 November 2020.
McNeish, D., & Wolf, M. G. (2020). Thinking twice about sum scores. Behavior research methods, 1-19.
Üstün, T. B., Kostanjsek, N., Chatterji, S., & Rehm, J. (Eds.). (2010). Measuring health and disability: Manual for WHO disability assessment schedule WHODAS 2.0. World Health Organization.
WHO (2020). World Health Organization. International Classification of Functioning, Disability and Health. Geneva, Switzerland. ICF Core Sets. Retrieved 23 November 2020 from https://www.icf-core-sets.org.