4 Comments

Please keep questions on topic, write clearly and concisely, and don't post diet calculations.

Chris

Good article, thanks!
One caveat: I think were sometimes a bit too fast to deduce true interindividual differences (aka interactional effects) from sheer variance of results in a study. And introduce the “art of coaching” thereafter as a remedy.

The very reason for forming groups and randomly assigning participants to them in an RCT is the knowledge that there will be variance due to a lot of (mostly uncontrollable) acute influences, such as sleep, motivation, and nutrition, as well as constant ones like genetics.

Now, if a single person reacts differently from the group mean, whether in the opposite direction or in the same direction but to a lesser/greater degree, that does not automatically mean there is a true interaction effect. More probably it’s just these uncontrollable influences, due to which we did an RCT in the first place, that produced that individual difference.

Of course there are hints as to whether there is indeed a true interaction effect or just variance: How large is the difference from the group mean? Do the differing participants share some similarities (gender, age, training age, nutrition)? It’s then the task of subsequent research to test a hypothesis about this interaction effect.
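To make that concrete: one common check for true individual responses is to compare the spread of change scores in the intervention group against a control (or repeated-baseline) group; if the intervention group doesn’t vary more than the control, the individual differences are mostly noise. A minimal sketch in Python, with entirely made-up numbers:

```python
import math

# Hypothetical SDs of pre-to-post change scores from an RCT.
# If the intervention SD barely exceeds the control SD, most of the
# spread in individual results is noise, not true individual response.
sd_change_intervention = 2.5  # e.g. % muscle growth (made-up number)
sd_change_control = 2.2       # same outcome in the control group

variance_ir = sd_change_intervention**2 - sd_change_control**2
if variance_ir > 0:
    sd_ir = math.sqrt(variance_ir)
    print(f"Estimated SD of true individual responses: {sd_ir:.2f}")
else:
    # The control group varies as much as (or more than) the intervention
    # group: no evidence of true interindividual response differences.
    print("No evidence of true individual response beyond random variation.")
```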

As to the remedy of applying the “art of coaching”: I hate to say it, but I think we often deceive ourselves about how valid our art really is.

First of all, outside of a study you always have the problem of n=1 and an uncontrolled setting: if your client responded mediocrely to one kind of training some weeks ago, you change it and they respond better now – was it really the change of training? Or one of the myriad factors we cancel out in research with randomization and groups?

Second, in some areas we simply can’t measure it: different training approaches (e.g., differing volume or frequency) often produce modest effect sizes in research, and even more modest absolute differences. Outside of a study, even if we could be certain that differences stem from our art – the manipulation of training variables – we often wouldn’t be able to measure the success or lack of it: if training A shows a 6% biceps volume increase and training B shows 7.5% after eight weeks, how on earth can we confirm or dispute this, or any differing, result in a single client? It may be easier when coaching for strength, but for hypertrophy, the art appears very shaky to me.
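To illustrate that measurement problem with made-up numbers: suppose the expected difference between the two programs is 1.5 percentage points, while the test-retest error of the measurement for a single client is around 2 percentage points; the signal is then smaller than the noise. A rough sketch:

```python
# Hypothetical numbers: expected group-level difference between two
# programs vs. the measurement noise you face with a single client.
expected_difference = 1.5  # percentage points (7.5% - 6% growth)
typical_error = 2.0        # assumed test-retest error, in percentage points

# Crude rule of thumb: an observed n=1 difference is only interpretable
# if it clearly exceeds the noise of repeated measurements.
if expected_difference > 2 * typical_error:
    print("The difference might be detectable in a single client.")
else:
    print("The difference is buried in measurement noise at n=1.")
```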

James Krieger

Hi, Chris,

Your points are very valid. Unfortunately, there are really no good solutions, and we often have to play the hand we’re dealt and accept the limitations of our methods. You are correct that it can sometimes be nearly impossible to measure what is responsible for a client’s success (or lack of it), or to truly determine whether our changes actually worked (or was it something else other than our change?). It’s also difficult to know whether a response is due to genetic variability or other factors.

It comes down to having to feel comfortable with uncertainty. Science itself is uncertain. Decisions and conclusions are made based on probabilities. Coaching is no different…it’s just that the environment is much more uncontrolled, and there’s a lot more uncertainty. We work with that uncertainty and make educated guesses as to what might work for a client. If something doesn’t work, we make educated guesses as to why it didn’t work. But we can never know for certain.

Troy

James Krieger, delivering like always 🙂

I had the pleasure of listening to James speak about insulin and weight regulation in Sydney a couple of years ago and not only was he a fantastic guy, but such a knowledgeable one too.

This bridging of the gap between theory and practice is spot on in terms of my personal experience, and I was so glad, Andy, that you were able to have him come in and share this incredibly valuable information on your site.

I particularly enjoyed the part about being innovative and trying new things. I often have conversations with another health practitioner I know, a physiotherapist, who has suggested that if there is no evidence saying a technique, method, or protocol is bad or dangerous (as James noted), then trial it; it may be something very simple yet very effective. I couldn’t agree more. There’s absolutely nothing wrong with using science to guide the principles of practice, but sometimes in practice you just can’t depend on evidence that isn’t there yet.

Thanks again for this article James and Andy.

Andy Morgan

Thanks for taking the time to write, Troy. 🙂