@JohnTheJack: Do you know if the data on school attendance provided is that which allows to calculation of the trial's presepecified primary outcome (ie: that from school records, rather than just the participant's self-report)?
I was expecting there to be two sets of school attendance data, but can only see one.
Thanks for all your work on this.
Ah, see, it's missing the secret unlocking code found in Tarot cards.
If there is a lot of missing data, how robust can any conclusions be?
I suspect data protection rules may have stymied procuring official attendance records. When it comes to children, data sharing in education can be extremely strict.
The study may have straddled the timescale over which recent data protection legislation was being implemented?
I don't. I think others are better placed than I am to comment on the data.
They used some imputation techniques to fill in missing data, but I think you need additional variables (things like age, sex, ...) to be able to do this, as you build a model of the relationships between the variables and use that model to fill in the missing values.
In their paper they did look at the sensitivity to using these techniques, and I think if they quote intention-to-treat figures they use imputation (but I'm not sure). I suspect someone like @Lucibee or @Sid could say more.
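For anyone curious what that looks like in practice, here's a minimal sketch of model-based imputation along those lines, using scikit-learn's IterativeImputer. The column names and values are made up for illustration - this isn't the actual SMILE imputation model or its covariates, just the general idea of using auxiliary variables like age and sex to predict the missing values:

# A rough sketch only - made-up columns, not the SMILE variables or model.
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical dataset: auxiliary variables plus an outcome with gaps.
df = pd.DataFrame({
    "age":             [12, 13, 15, 14, 16, 12],
    "sex":             [0, 1, 1, 0, 0, 1],                  # coded 0/1
    "attend_baseline": [3.0, 4.0, 2.0, 5.0, 1.0, 3.0],      # days/week, 0-5
    "attend_6m":       [4.0, None, 3.0, 5.0, None, 4.0],    # missing follow-ups
})

# Build a model of the relationships between the variables and use it
# to fill in the missing values.
imputer = IterativeImputer(random_state=0, max_iter=10)
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(imputed)

Proper multiple imputation would repeat this across several imputed datasets and pool the results, which I'd guess is closer to what the trial actually did, but the principle is the same.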
So the attendance difference implies that this is compared to a base figure? Do we have these numbers? Surely if the base attendance was 50% (say 95 days out of a potential 190 in a year), then one day's difference wouldn't be significant... Sorry, I may be being a bit simplistic, but shouldn't they be looking at the context of the difference? I'm just wondering what the basis of that calculation is? I can't yet look at the data since my iPad doesn't unzip files... need to crank open a laptop.
Like @Adrian, I've had a look at the attendance data.
Before I saw the data (back in 2017), I thought that they should have looked at mean difference and not difference in means (same applies to PACE data imo). That bears out, especially as the data are so skewed. Individual diffs show a much more normal distribution, but would still benefit from baseline adjustment (I don't have the facilities to do this at the moment, nor to test stat sig).
I've shown graphs of the differences between baseline and 6 months, and between baseline and 12 months, in SR school attendance, amalgamated to whole days (±0.5-1, 1.5-2, 2.5-3, etc.).
6 months: Mean diff for SMC is 0.28 days. Mean diff for LP+SMC is 0.81 days. Diff of diffs = 0.54 days
12 months: Mean diff for SMC is 0.79 days. Mean diff for LP+SMC is 1.57 days. Diff of diffs = 0.78 days
School attendance at 6 months was supposed to be a primary outcome measure. The SMILE trial paper only reports attendance at 12 months as a secondary endpoint. Make of that what you will.
Also note that 40% of the data are missing (in both groups) by the time we get to 12 months.
[Attachments 7081 and 7082: graphs of the baseline-to-6-month and baseline-to-12-month differences in SR school attendance]
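To make the "mean difference vs difference in means" point concrete, here's a toy sketch with invented numbers (the variable names are mine, not the released field names). The two summaries coincide when every child has both a baseline and a follow-up value, but they drift apart once data are missing - which matters here given that 40% of the data are missing by 12 months:

import pandas as pd

# Invented toy data - not the real SMILE values.
df = pd.DataFrame({
    "arm":      ["SMC", "SMC", "SMC", "LP+SMC", "LP+SMC", "LP+SMC"],
    "baseline": [3.0, 2.0, 4.0, 1.0, 3.0, 2.0],   # SR days/week at entry
    "at_6m":    [3.0, 3.0, None, 2.0, 4.0, 3.0],  # SR days/week at 6 months
})

# Mean difference: each child's change from their own baseline, then averaged.
df["diff"] = df["at_6m"] - df["baseline"]
mean_diff = df.groupby("arm")["diff"].mean()

# Difference in means: group mean at 6 months minus group mean at baseline.
# With the missing value above, this no longer equals the mean difference,
# because the dropped-out child's baseline still counts in one of the terms.
diff_in_means = df.groupby("arm")["at_6m"].mean() - df.groupby("arm")["baseline"].mean()

print(mean_diff)
print(diff_in_means)
print("diff of diffs:", mean_diff["LP+SMC"] - mean_diff["SMC"])

Baseline adjustment (e.g. regressing the follow-up values on arm plus baseline) would be the obvious next step, but as Lucibee says, that needs proper stats software.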
I think the 'days attendance' is days per week, so the possible range in the attendance data is 0-5. The difference that Lucibee has graphed is then the difference for each child between their self-reported weekly attendance at baseline and 6 months, which has a range from -5 to +5.
And I realise I should have left that for Lucibee to answer. Sorry, fools rush in and all that.
No worries and thanks... I'm sure she won't mind. I'm still struggling to make head nor tail of what it says. I guess that laptop needs to come out and I'll need to refresh my memory on her 'methodology' before trying to understand it?
Actually, has anyone compared the school attendance results to those from the published paper?
I think this is a very valid point - what is the attendance datum (say for the previous 3-6 months) from which you are measuring? The interventions have to be set in context of what is "normal" for these individuals.
Baseline data certainly match (from Table 1).
Just checked means from Table 3 - all present and correct.
Thanks Lucibee - so that confirms that this is not the unpublished data from school records.
@JohnTheJack - it could be worth checking if this data was omitted by Bristol in error?
Please provide the following patient-level data at baseline, 3 months, 6 months and 1 year assessments, where available.
1. SF-36 physical functioning scores.
2. School attendance in the previous week, collected as a percentage (10, 20, 40, 60, 80 and 100%).
3. Chalder Fatigue Scale scores.
4. Pain visual analogue scale scores.
5. HADS scores.
6. SCAS scores.
7. Work Productivity and Activity Impairment Questionnaire: General Health.
8. Health Resource Use Questionnaire.