Is the NDIS Making People More Disabled?

The government claims NDIA data shows participants' functional capacity is declining over time. What's really going on?

By Sara Gingold

Updated 15 Apr 2024 | First published 7 Sept 2021

This ain’t exactly a hot take, but if the NDIS was decreasing the functional capacity of people with disability, we would have a very serious problem on our hands. We know the NDIS can be traumatising as hell, but could it actually be making the majority of people worse?  

Earlier in the year, NDIS Minister Linda Reynolds surprised most of the disability community by saying that according to the NDIA’s data, participants are reporting “declining functionality” over time. The NDIA’s internal systems rank people on a scale of 1 to 15, with 1 being “high function” and 15 being “low function”. Over the years, fewer people have been classified as high functioning while an increasing number have been recorded as low functioning. This quarter, the Agency really doubled down on the claim by releasing an Addendum to the Quarterly Report dedicated solely to exploring changes in functional capacity by disability cohort, state, and length of time in the Scheme. 

Obviously, Reynolds’s assertion is worthy of some serious scrutiny. So, let’s look at what the hell is going on and whether one of Australia’s largest ever social reforms could actually be making people more disabled.   


The NDIA data

The data in the Addendum does appear to show that over time the number of participants classified as high function is decreasing and the number classified as low function is increasing. You can see this trend in the graph below, which covers participants who have been in the Scheme for five years, across all disability cohorts and Australia-wide:

Over 58 pages, the Addendum demonstrates that this trend is pretty consistent across the states and territories, regardless of how long people have been in the Scheme. The pattern is also repeated in nearly all disability cohorts, not just those associated with conditions you would expect to be degenerative in nature.

Oddly enough, the Addendum offers little in the way of analysis as to why this trend is occurring. However, reading between the not-so-subtle lines, we can quickly work out why the NDIA is getting a tad anxious. Basically, they believe lower recorded levels of functional capacity are resulting in higher plan budgets, and we all know how they feel about higher plan budgets.


Why is this trend occurring? 

The all-important question we need to be asking is: what the hell is going on here? Well, there are two possible explanations:  

  1. People’s functioning is actually decreasing, perhaps because of the NDIS. 
  2. The data is just plain wrong.  

Which is it? During Senate Estimates on 3 August, both the Scheme Actuary and the Minister rejected the idea that people’s functioning has actually been declining. However, in a somewhat confusing exchange, Reynolds stopped short of suggesting the NDIA’s method of collecting the data was the problem. The implication was that the collection process was sound, but people (i.e., participants, families, and possibly Allied Health professionals) were providing incorrect information to guarantee larger plans.  

Minister Reynolds has acknowledged that the trend in the functional capacity data does not align with the qualitative data the NDIA collects. Reading the results of the outcomes surveys in the Quarterly Reports, you would think the NDIS is doing nothing but dramatically changing lives for the better (arguably, this data swings too far in the opposite direction).


The inverse funding model

At the Senate Estimates hearing, Reynolds posited that the NDIS funding model could be driving the reported decrease in function. Her suggestion was that people are reporting a decrease in function because they do not want their funding to be cut. This idea is somewhat valid, as we all know that the NDIS planning process focuses on deficits and incentivises people to emphasise what they cannot do. However, without understanding how function is being assessed, it is impossible to know whether the funding model is contributing to the trend. Participants might feel they need to report a decrease in function, but to whom are they reporting it? And when?

It should be no surprise that Reynolds finds this explanation appealing. We know the government and the NDIA want to make dramatic changes to the planning process. Making the planning process the culprit behind this trend, therefore, makes political sense.

Why are we having this conversation? 

It’s a bit bloody annoying that the NDIA has highlighted this data and then seemingly walked back the key conclusion we would be expected to draw from it.

At Senate Estimates, Reynolds was asked why, if they do not actually believe function is declining, this Quarterly Report Addendum was ever published. Her answer was – to paraphrase liberally – you can’t say you want data transparency and then whinge when you get it. However, this line of argument was more than a bit cheeky. There is a difference between sharing data and highlighting it. Releasing an entire Addendum dedicated to the topic, a move that doesn’t exactly have a great deal of precedent, is firmly in the latter category. 

How does the NDIA measure functional capacity? 

Moving forward, it is still important to understand how the NDIA is measuring function. Reynolds has suggested that because capacity building is the main objective of the Scheme, function levels are the key data set to monitor. While equating capacity building to function levels is rather simplistic, in an ideal world this would still be data worth having. 

The trouble is that we have no idea how this data is being collected.  

Honestly, this is a mystery worthy of its own podcast special. The method of collecting this critical data on functional capacity is strangely absent from the Addendum or any surrounding commentary. I’m no academic, but even I know that methodology is a critical component of any research presentation. Otherwise, people have no idea whether to take your results seriously – which pretty much sums up the position we are in now.

The Addendum does very, very briefly cover the types of assessments used to measure function. Participants are assessed with either a generalised assessment tool (i.e., WHODAS 2.0 or PEDI-CAT), a disability-specific assessment tool, or a mixture of the two. However, anybody who has been through the planning process would know that you don’t complete an assessment of this nature at each plan review. These assessments are usually only completed upon entry to the Scheme and when life circumstances change. So how is the NDIA getting year-on-year data?

The logical explanation would be that a small sample of participants is undergoing regular assessments. However, the Addendum quashes this theory by stating that the proportion of participants missing from the data is very small, at less than 1%. That leaves us wondering how the Agency has been determining functional capacity for the other 99% of participants over the last five years without any regular assessment process in place.

As if things were not confusing enough, it is also unclear where these 15 levels of function come from. They don’t correspond to any particular assessment and seem to be an invention entirely of the NDIA’s making. 


Disability-specific vs generic assessments

Putting aside the data’s faulty origin story, we can still reach some interesting conclusions from the Addendum. What is particularly noteworthy is that the trend of participants’ functional capacity decreasing over time is significantly more apparent when generic assessment tools like WHODAS 2.0 or PEDI-CAT are used rather than disability-specific tools. 

Below, you can see the trend is quite dramatic when using generic tools: 

However, when it comes to disability-specific assessments, the trend is far more subdued. With this cohort, the levels of change in function are considerably more subtle.

One potential reading of this data would be that the generic assessments are not accurately measuring function in the way disability-specific tools are. If this is the case, it is a poor reflection on the NDIA’s now-abandoned Independent Assessments (IAs) program, which relied solely on generic assessments.

Of the participants who entered the Scheme in June 2017, 50% did a generic assessment, 10% did a disability-specific assessment, and 40% did a mix of the two.


Final verdict

Is the NDIS making people more disabled? 

It is truly impossible to draw any conclusions from the data available, but there does seem to be general agreement that this is not in fact the case. 

The most likely explanation is that this is a data collection issue. The NDIA is recording funky data – it’s no simpler or more complex than that. And let’s be blunt: that isn’t really our problem. The NDIA has work to do fixing how they are measuring function, and a 58-page Addendum covering arbitrary disability ratings might help them achieve a political outcome, but it gets us no closer to the truth.
