Patient-led measure of outcomes

hotblack

Senior Member (Voting Rights)
Thinking about how we can measure if treatments work. A lot of the questionnaires and scales seem to try to compare across subjects, which is really difficult for many of us; they don’t fit our experiences or severity. So people try to capture a range of experiences and we end up with huge long questionnaires. So how do we measure outcomes?

How about this
- Pick and describe in your own words 5 activities that you feel define your current limitations, including 2 things you can only occasionally do
Examples: get to the toilet in the morning, sit up comfortably throughout the day, have a 5 minute conversation with someone, have a shower, walk to the car (I don’t know, I haven’t done those last two for years, but you get the idea, the usual sort of things we see on questionnaires, but defined by the patient)
- Count how many days you can do these per month, before and after interventions, without negative impact, record weekly
- Maybe add a measure of how many days are ‘good’ ‘average’ or ‘bad’ for you, record daily

This would be person specific but capture the changes which are relevant and/or important to them and how their ME/CFS affects them. It would be quicker than most things to record but I think would allow measurement of whether an intervention has actually worked.
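As a sketch only: the weekly record proposed above could be kept on paper or a spreadsheet, but a minimal Python version shows how little is involved. The activity names and all the numbers below are hypothetical examples, not data from anyone’s record.

```python
# Minimal sketch of the proposed record: for each chosen activity,
# note each week how many days it was done without negative impact,
# then total per month to compare before and after an intervention.
from collections import defaultdict

# The patient's own 5 activity descriptors (hypothetical examples).
ACTIVITIES = [
    "get to the toilet in the morning",
    "sit up comfortably throughout the day",
    "have a 5 minute conversation",
    "have a shower",       # an 'occasional' item
    "walk to the car",     # an 'occasional' item
]

def monthly_totals(weekly_records):
    """Sum days-per-activity over the weekly entries in a month."""
    totals = defaultdict(int)
    for week in weekly_records:
        for activity, days in week.items():
            totals[activity] += days
    return dict(totals)

# A hypothetical month of four weekly entries (two activities shown).
weeks = [
    {"have a shower": 1, "sit up comfortably throughout the day": 3},
    {"have a shower": 0, "sit up comfortably throughout the day": 4},
    {"have a shower": 2, "sit up comfortably throughout the day": 2},
    {"have a shower": 1, "sit up comfortably throughout the day": 5},
]

print(monthly_totals(weeks))
```

The same monthly totals, recorded before and after an intervention, would give the per-patient comparison described above without any cross-subject scale.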

Probably needs some refinements but…thoughts?
 
Personalised outcome measures are perfectly valid (see Edwards, Isenberg and Snaith for a lupus trial using them) but seem to go against the popular conception of standardising everything. Committees that like 'minimum data sets' would hate it.

In theory a standardised set of measures that covered most people's key deficits would pick up the same changes, but it would have more noise and, in this context, the downside of being long.

I think it is a good idea.
 
So someone doing a small proof of concept or dose response trial might get away with personalised measures, but a big Phase III would be less likely to?

That wouldn't necessarily be a problem, would it? Bigger trial, more £££, can do belt and braces (personalised and standardised) to keep everyone happy?
 
This is how I track how I’m doing. Makes it easier to point out to people that I’m not doing better physically because if I were, I would occasionally leave the prison I’ve been in for far too long.

Personalised outcome measures are perfectly valid (see Edwards, Isenberg and Snaith for a lupus trial using them)
Is it this one?
 
I think FUNCAP covers this and has the advantage of applying across all severity levels and taking into account whether doing the activity means you can do little else that day or it doesn't affect you. The cumulative effect of activities and their after-effects needs to be taken into account, I think.
This same discussion is also happening on another thread.
 
So someone doing a small proof of concept or dose response trial might get away with personalised measures, but a big Phase III would be less likely to?

It is likely to be harder to persuade all the people likely to be involved in a big phase III trial. On the other hand, if a small phase II study showed a clear difference using personalised measures and looked solid on things like blinding, then scaling up to a large confirmatory study using the same personalised measures might be something to argue for.

In our lupus trial there was no difference between two treatments suggesting that the more invasive, toxic and expensive version that had become fashionable was not justified. If we had found a difference, even a trend that suggested there might be a significant difference with a larger trial, then I would have pushed for using the same system.
 
I think FUNCAP covers this and has the advantage of applying across all severity levels and taking into account whether doing the activity means you can do little else that day or it doesn't affect you. The cumulative effect of activities and their after-effects needs to be taken into account, I think.
This same discussion is also happening on another thread.
I had a quick squiz at FUNCAP and didn't like it because it requires you to know how long you'd take to recover if you did the thing in question. But if you never attempt the thing because you know you shouldn't, you'll never know.
 
I think FUNCAP covers this and has the advantage of applying across all severity levels and taking into account whether doing the activity means you can do little else that day or it doesn't affect you
The problem is FUNCAP is still long: it’s 55 questions with a 6-point scale. There are a bunch of things which are not applicable so will just get the same answer; it’s trying to cover too much and is too complex.

In this proposal, the question is simply how do we measure whether a patient considers themselves improving. That is surely what matters?

If people were able to pick 5 items from FUNCAP that they feel apply to them, and use them in the framework I propose maybe that would work?
 
the question is simply how do we measure whether a patient considers themselves improving. That is surely what matters?

Yes, I agree.

And if the activities are recorded by participants at the time, not just done from memory, it's as objective as anything else we have. Committees might prefer stationary bikes, but they aren't really objective; participants may have had to rest up beforehand and some are left flattened afterwards. An ongoing record of real life activities over a long period shows a trend that's difficult to fake.
 
The problem is FUNCAP is still long: it’s 55 questions with a 6-point scale. There are a bunch of things which are not applicable so will just get the same answer; it’s trying to cover too much and is too complex.

In this proposal, the question is simply how do we measure whether a patient considers themselves improving. That is surely what matters?

If people were able to pick 5 items from FUNCAP that they feel apply to them, and use them in the framework I propose maybe that would work?
There is a 27 question version of FUNCAP as well.
 
There is a 27 question version of FUNCAP as well.

Oh, I’d missed that, thanks. And I do like what they’re trying to capture, their descriptors are good. But I like the idea of making it even more lightweight.

The studies I’ve found best have used very short weekly questionnaires; I can keep track of a few data points over a week and generally judge whether it’s been a good or bad day or week. Beyond that…
 
Some questions I have and am unsure of the answers to; I’d appreciate feedback:

- Would picking 5 descriptors from FUNCAP 55 be easier/better than completely patient-created descriptors, or are these too restrictive?
- Should the question be: can you do these things without significant negative impact? Or comfortably at the time? Or just: did you do these things? How important is it to capture negative knock-ons? Or would these simply reduce how often you did things, so be captured by the frequency measure?
- I like the idea of the patient measuring these 5 things weekly; it’s easy enough to then do a monthly total. What do people think? Would monthly recording work better or less well?
- Would we need the daily or weekly good/bad/average measure too? Does this give us extra useful data or is it too much to record?
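On the last question, the daily good/average/bad measure is cheap to summarise, which may matter when weighing extra data against recording burden. A sketch, assuming one rating per day (the week of ratings below is a made-up example):

```python
# Sketch: summarise a run of daily good/average/bad ratings into counts.
from collections import Counter

def summarise_days(ratings):
    """Count good/average/bad days from a list of daily ratings."""
    counts = Counter(ratings)
    return {k: counts.get(k, 0) for k in ("good", "average", "bad")}

# A hypothetical week of daily ratings.
week = ["bad", "average", "average", "good", "bad", "average", "good"]
print(summarise_days(week))  # {'good': 2, 'average': 3, 'bad': 2}
```

If recording one word per day is sustainable, this would sit alongside the weekly activity counts without adding much to the burden; if not, the frequency measure alone may have to carry the signal.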

Maybe we can organise our own trial of this framework? I could see people using whatever method they prefer, a spreadsheet or just a piece of paper. And not sharing any data but sharing what they like or don’t like, what works or doesn’t work, etc.

There’s no intervention to test, but we could learn how best to make the framework/process work for different people. If we can get something that works for patients, maybe some researchers would take it up?
 
Is it this one?
The discussion section of the paper is really interesting and very relevant. I’d quote it but copying from the png/pdf is giving terrible formatting issues which are a faff for me to clean up atm.
 