Preprint: Designing and Developing an eHealth Program for Patients With Persistent Physical Symptoms: Usability Study (2023, Christensen et al)

Abstract

Background:
Patients with persistent physical symptoms presenting in primary care are often affected by multiple symptoms and reduced functioning. The medical and societal costs of these patients are high, and there is a need for new interventions tailored to both the patients and health care system.

Objective:
This study aimed to examine the usability of an unguided, self-help treatment program, “My Symptoms,” developed to assist patients and general practitioners in symptom management.

Methods:
In all, 11 users (4 patients with persistent physical symptoms and 7 laypeople) participated in web-based thinking-aloud interviews involving the performance of predefined tasks in the program. Thematic analysis was used to categorize the severity of usability issues. General usability heuristics were cross-referenced with the usability issues.

Results:
The analysis identified important usability issues related to functionality, navigation, and content. The study shows how therapeutic knowledge in some cases was lost in the translation of face-to-face therapy to a digital format. The user testing helped uncover how the functionality of the digital elements and general navigation of the program played a huge part in locating and accessing the needed treatment. Examples of redesign to mediate the therapeutic value in the digital format involving health care professionals, web developers, and users are provided. The study also highlights the differences of involving patients and laypeople in the interviews.

Conclusions:
Taking the experience of common symptoms as a point of departure, patients and laypeople contributed to finding usability issues on program functionality, navigation, and content to improve the program and make the treatment more accessible to users.

https://humanfactors.jmir.org/2023/1/e42572
 
Ugh. I was reviewing Precision Medicine lately - it is a nebulous term that can cover all sorts of things, from genetic screening to the use of technology. It seems that useful medical innovations don't bother with the Precision Medicine branding, but technology-based behavioural interventions do. The term sort of creates a warm fuzzy glow about how screening at birth can reduce the incidence of serious health problems, or how testing wastewater can show where Covid outbreaks are, things like that.

But a recent review of all the papers using the term precision medicine found that they were overwhelmingly about apps and self-help treatments - commercial products designed to put the onus of achieving health on the individual. Tellingly, the review found that only a very low percentage of the studies were randomised controlled trials, or even envisaged doing one at some point. Of those for which there was trial data, very few reported a useful outcome.

This paper seems to be more an advertising project than real science - e.g. "We talked with 11 people, only 4 of whom actually have the (made-up) condition, to get feedback on our app."
To assist GPs in symptom management and to offer patients with PPS a new treatment option, we developed a novel eHealth program, “My Symptoms.” The program content is inspired by cognitive behavioral therapy. It provides psychoeducation on symptoms and modules on the impact of lifestyle, stress and strain, thoughts, feelings, values, and self-care. Throughout the modules, interactive tools to support behavior change are embedded. The patient can interact with modules on his or her own accord (Figure 2).

The content of “My Symptoms” is presented in various forms such as text, pictures, figures, interactive elements, audio, and video. The program is prescribed by the GP but is unguided, that is, no health care professional (HCP) will assist the patient in the use of the program. The program is a responsive web application that is accessible from computers, tablets, and smartphones through a web browser.

Figure 2 (image from the paper, not reproduced here)

A market research process is presented as the 'democratisation of the development'.
Here, emphasis was on the democratization of the development from different stakeholders and participants via iterative processes.
 
It sounds like the usability testing was worth doing, but as to the content of the material, which was not the subject of the research, it sounds like the same old CBT.

Crossposted with Hutan.

Edit:
The laypeople were picking up pretty basic flaws in the design - which shows just how rubbish the design was.
 
The sample:
They aimed to get 12 patients who experience PPS but for whom the severity is not bad enough to qualify them for a diagnosis of persistent physical symptom disorder:
We included a convenience sample of primary care patients and laypeople. In all, 6 GPs identified and invited 4 patients aged 18-65 years with PPS.

They got 4 patients - Covid and all that, but it's still surprisingly low from 6 GPs, given that people with persistent symptoms are supposed to be clogging up GP clinics all over the world, and given that the user research was done online. But never mind - they refer to a 'convenience' sample; they found some random people to pretend to be PPS patients.
To finalize the study within its time limits, we therefore chose to supplement the user inclusion with 7 laypeople recruited through personal networks. As bothersome symptoms are a general phenomenon we expected laypeople to be able to relate to current or prior symptom experiences.

Perhaps you too will be blown away by the sophistication of the 'thinking aloud' method:

To investigate usability, we applied the thinking-aloud method. The aim of this method was to “capture” the users’ thoughts as they navigated the “My Symptoms” program to gain insight into how they experienced the program in the context of actual use and what they found easy or difficult to do or understand [20]. The project group translated these verbalized thoughts into specific changes that needed to be made in the program.

Because they found the patients and laypeople interacted similarly with the buttons etc., they mostly used laypeople for later rounds:
Rounds 2 and 3 focused more on testing predefined, specific elements in the program rather than core functionality. From the first round of testing, we observed that laypeople and patients interacted similarly to buttons, sliders, and other interactive web elements, which was why we included more laypeople than patients for these rounds.
So, that sounds like democratisation when it comes to what size button is preferred, but not much patient input when it comes to specific content.
 
Presumably this should be just an early stage in testing an interactive digital product. As I said above, the flaws they found were very basic: people needing to be able to find their way through the process, and not getting bogged down with too much text or with unclear tasks. It hardly needs to be written up as a scientific paper; it's just basic first-stage product testing.
 
What a waste of resources. There are actual experts in information architecture, design, and user experience. This is performative nonsense - pretending to involve patients, but only on the equivalent of deciding which walls to paint a different colour after the entire hospital has been built.

Medicine is becoming more and more about bureaucratic nonsense every day. It has completely lost sight of the fact that the mission is supposed to be helping sick and injured people, not merely executing scripts from a book of scripts.

My feeling about this is that involving patients will make them feel like they own part of the outcome. Except that only applies to this small group; the rest of the patients won't care one bit about this Potemkin medicine-by-checkbox, since it's useless.
 
Patients were more interested in the content than the laypeople were, even though the developers "explicitly stated that reading the content thoroughly was not necessary during the usability testing".

Core issues in this category were related to pages being text heavy, the perception of program legitimacy, and the use of language directed at the user. ...These issues were especially evident when comparing feedback from patients and laypeople. Patients were more focused on the framing of the content in the program than laypeople. For example, in content, quotation marks were used to stress some of the medical terms. Most patients responded poorly toward this usage and figured the terms were made up, making them question the legitimacy of the program and their own experience with their symptoms. This was not an issue with laypeople. Patients also spent more time investigating content as means of navigation and interaction, even though we explicitly stated that reading the content thoroughly was not necessary during the usability testing.

It sounds as though some of the medical terms had speech marks put around them. It's hard to imagine what they were, but not so hard to imagine patients being annoyed that their symptoms were being invalidated.

Yes, this isn't science; it's a record of a fairly low-level effort at usability testing. It isn't worthy of being a paper.

It's an example of what seems to be an explosion of health apps designed to "help" people be responsible for their own health. They may sometimes be useful, but it's easy to see how they will appeal to health system decision-makers who don't have enough money or medical staff for patients to actually receive care. As with many of these papers, there is little consideration of how delivering health care via apps may increase societal inequities - not everyone has access to a desktop computer, and not everyone is confident with computers.

abstract said:
The medical and societal costs of these patients are high, and there is a need for new interventions tailored to both the patients and health care system.
It's revealing that the abstract talks about the patients costing the health system and society a lot, rather than the ongoing existence of the health condition being the problem.

There is no consideration of the validity of the underlying premise of the software - presumably that people can be trained to realise that the excruciating back pain or whatever it is they feel is just normal, and nothing to complain about. As I mentioned in my first post, like most of the papers on health apps that were included in a very large review, there is no stated intention to trial the software at some point to see if it does in fact work.
 