The fact that the thing you're trying to measure is subjective by nature doesn't change the inherent unreliability of subjective outcome measures. It has to be accounted for, even if it's the best you've got. And there is no law that says your best is good enough for science. So we have to entertain the possibility that it might not be good enough, period.
I suspect the whole mental health research industry has measurement problems: too much reliance on subjective measures, plus some badly designed questionnaires that shouldn't be used as measurement scales at all. But perhaps that just means we need better research into measurement approaches. With the advent of reasonably good wearables (and even phones that are fairly accurate on step counts), much more is now possible. Advances in AI could allow for much better recording at the time, or better interpretation of activity; things like automatic speech recognition and sentiment analysis could potentially help.
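To make the sentiment-analysis idea concrete, here is a minimal sketch of a lexicon-based sentiment score applied to a diary entry. The word lists, weights, and the `sentiment_score` function are all invented for illustration; any real instrument of this kind would itself need validation against clinical outcomes, which is exactly the measurement-research point above.

```python
# Toy lexicon-based sentiment scorer for free-text diary entries.
# The word lists below are illustrative assumptions, not a validated lexicon.

POSITIVE = {"happy", "calm", "hopeful", "rested", "energetic"}
NEGATIVE = {"sad", "anxious", "tired", "hopeless", "irritable"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: balance of positive vs. negative
    sentiment words found in the text (0.0 if none are found)."""
    words = (w.strip(".,!?") for w in text.lower().split())
    pos = neg = 0
    for w in words:
        if w in POSITIVE:
            pos += 1
        elif w in NEGATIVE:
            neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Felt anxious and tired, but hopeful by evening."))
```

Even this trivial example shows where the unreliability creeps in: the score depends entirely on a hand-picked word list, so it measures the lexicon as much as the patient's state.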
But if the measurement systems are not robust enough to give reliable results, then doing research on humans with them is unethical.