Many people have told me that they appreciate my objective, unbiased approach to exercise and nutrition.

I’m going to tell you right now, though, that I am not unbiased.

In fact, I’m very biased.

I’m biased towards science. As a scientist, I’m biased towards it by default, and for good reason: science represents the absolute best process we have for understanding the world around us. I emphasize the word process because that’s what science is.

It’s not a collection of facts or beliefs. It’s a process of how we try to learn what’s true, or at least what is most likely to be true. It involves formulating ideas to describe why we’re observing something (known as formulating hypotheses) and then testing those ideas through data collection and experimentation (known as hypothesis testing). We reject ideas that our data and experiments don’t support, and we further explore the ideas that our data and experiments do support.

Over time, through this scientific process, we develop a body of knowledge of what we consider to be true about the world around us.

In addition to being a scientist, I’m a coach. As a coach, I want to get my clients the best results possible. But to do that, I need an understanding of what is true. In the world of exercise and nutrition, there’s a lot of bullshit out there… bullshit that will result in my clients NOT getting what they want and/or wasting their time and money.

Thus, I turn to science to help me sift through all the bullshit and learn what is true about training and nutrition. Then, using this scientific evidence, I can formulate coaching decisions.

Melding Science and Coaching

Science is a great tool on which to base coaching decisions, but it has its limitations. When scientists do studies, they are examining groups of people. For example, perhaps the scientists are doing a study comparing high volume training to low volume training. They’ll put one group of people on a high volume program, and another group on a low volume program. They’ll then look at strength and muscle gains after 8-12 weeks.

Let’s say the high volume program resulted in greater gains. You might look at that study and conclude that high volume is better, and you start putting all your clients on high volume programs. However, you find that some clients end up doing worse (or even getting hurt), while others do better.

What went wrong?

This is a perfect case of not taking individual needs and preferences into consideration. As I wrote in this article with Bret Contreras, studies can only tell you what works on average; individuals may vary widely in how they respond to a given training protocol. In a meta-analysis I published along with Brad Schoenfeld, we found that 10+ sets per muscle group per week resulted in the greatest gains in muscle size.

But this is based on average responses across a number of studies. It doesn’t mean that everyone should be doing 10+ sets per muscle group per week. Some people will deviate from this average and do better on lower volumes. Others may need very high volumes. Still others may need constant variation in their training volume.

10+ sets per muscle group per week is just a general guideline. It’s a framework from which you can start, but you need to consider individual needs as well.
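To make the “framework, not mandate” idea concrete, here is a minimal sketch of how a starting volume might be picked and then adjusted. To be clear, the 10-set baseline is the only evidence-based number here; the function, the adjustment rules, and the specific offsets are my own hypothetical examples, not anything the research prescribes.

```python
# Hypothetical sketch: treat the meta-analysis figure as a starting
# framework, then adjust for the individual. The adjustment rules and
# offsets below are invented for illustration.

def starting_weekly_sets(recovers_well: bool, time_limited: bool) -> int:
    """Pick a starting weekly set count per muscle group for a new client."""
    baseline = 10  # general guideline from the research, not a mandate

    if time_limited:
        # Fit the program to available gym time rather than force volume.
        return baseline - 4
    if recovers_well:
        # Some individuals deviate from the average and handle more volume.
        return baseline + 4
    return baseline

# A time-crunched client might start at 6 sets per muscle group per week,
# then move up or down based on how they actually respond.
print(starting_weekly_sets(recovers_well=False, time_limited=True))  # 6
```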

Let’s get back to my hypothetical example where you put all your clients on a high volume program, and some started to do worse. Perhaps some of the people did worse because they have poor recovery ability and can’t handle high volume. Perhaps some of the people were already doing high volume training and weren’t getting anywhere with it, so it doesn’t make any sense to keep doing high volume if it’s not working. Perhaps some people simply didn’t have the gym time to commit to a high volume program.

This is where the art of coaching comes in.

You take the science as a general guide but then take into consideration individual needs, background, and preferences.

In my hypothetical example, you consider what type of training the person has been doing, how they have responded to volume in the past, the time they have available to train, any injury issues, etc. For example, if they’ve already been training with high volume, perhaps they need a deload or prolonged period of low volume training before you put them on high volume again. If they lack time to train, but just want to bring up a specific body part, perhaps you just hit that one body part with high volume, while putting everything else on a low volume maintenance dose. There are endless ways in which you can meld the science with the needs of the individual.

There Are Infinite Possibilities That Studies Cannot Fully Account For

Another limitation of science is that scientists can’t investigate every variation of training or nutrition program out there.

Think about the endless variations of training programs that can be designed. It’s impossible for scientists to study them all. For example, if you see a study comparing linear periodization to undulating periodization, that study only tells us about a certain type of linear periodization compared to a certain type of undulating periodization, over a very specific time period, in a particular group of people (perhaps untrained men).

Results of the study could be very different with a different linear periodization program, undulating periodization program, time period, or different group of people.

This doesn’t mean you can’t use information from the study to help design your programs; it just means you need to consider the limitations of the research and NOT treat it as a “one-size-fits-all” solution.

Don’t Ignore the Individual

Even if science definitively showed that one particular training program or diet was better than all others, it still wouldn’t mean that’s what everyone should be doing.

For example, let’s say that research showed that the ABCDE diet resulted in better fat loss than the LMNOP diet. You have a client who hates the ABCDE diet, but likes the LMNOP diet. Should you put that client on the ABCDE diet? Of course not! We know that adherence is by far the biggest predictor of success on any diet.

It doesn’t matter how good the ABCDE diet is on paper; if your client can’t stick with it, it is a bad diet for that client. By trying to put the client on the ABCDE diet, you’re setting the client up for failure. But if you put the client on the LMNOP diet, you are setting the client up for success, even if the ABCDE diet does better for people on average.

There are also scenarios where science shows little to no difference between various strategies. This opens up your world of coaching and gives you tremendous flexibility in fitting a program to a client’s needs while keeping the program evidence based.

For example, research I published along with Alan Aragon and Brad Schoenfeld showed that meal frequency has little impact on fat loss. That means you have a wide selection of meal frequencies to choose from when programming for a client, and can set up a meal pattern that best fits your client’s preferences and daily schedule, without adversely affecting their results. In fact, by fitting the meal frequency to the client’s schedule, you will likely enhance their results through better adherence.

Differentiate Between What You Think Should Be Happening, and What Is Happening

A lot of times, when we put clients on a training and nutrition program, we think a particular outcome should happen. However, things don’t always happen in the way we think they should, no matter what the science or numbers tell us.

That is where you need to pay attention to the individual, and make adjustments where necessary. For example, I like to use Kevin Hall’s models to establish calorie targets for clients. They work reasonably well, but they don’t work for everyone.

I had one client whom I put on 2300 calories per day as an initial target. According to Kevin’s models, he should have been losing weight on that target. But he wasn’t, and in fact he was complaining that he felt overly full and stuffed. I was surprised that, at his body weight and composition, he wasn’t losing weight. I dropped his calories to 2000. Still, things didn’t budge. I eventually dropped him to 1750 calories per day, and things finally started to move. I was shocked we had to go that low on calorie intake for him, but it ended up working well.

This was a perfect case where I made adjustments based on what was actually happening, using his results and self-reported hunger levels to guide me, rather than on what I thought should be happening.
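To make that kind of adjustment concrete, here is a minimal sketch of the arithmetic. To be clear, this is not Kevin Hall’s model; it’s a back-of-the-envelope recalculation from observed results, and the function name and the rough 7700 kcal/kg constant are my own assumptions.

```python
# Illustrative only: re-estimate maintenance from what actually happened,
# then set a new target. The 7700 kcal/kg figure is a common rough
# approximation for the energy content of a kilogram of body mass.

KCAL_PER_KG = 7700

def adjusted_calorie_target(current_intake_kcal: float,
                            observed_weekly_change_kg: float,
                            desired_weekly_change_kg: float) -> float:
    """Compute a new daily calorie target from observed weight change.

    observed_weekly_change_kg: average change per week on the current
        intake (negative means the client is losing weight).
    desired_weekly_change_kg: the rate you want (negative for fat loss).
    """
    # If weight held steady on the current intake, that intake *is* the
    # observed maintenance, whatever a predictive model estimated.
    implied_maintenance = current_intake_kcal - (
        observed_weekly_change_kg * KCAL_PER_KG / 7
    )
    return implied_maintenance + desired_weekly_change_kg * KCAL_PER_KG / 7

# The client above held steady at 2300 kcal/day, so his observed
# maintenance was ~2300; targeting ~0.5 kg/week of loss lands at 1750.
print(adjusted_calorie_target(2300, 0.0, -0.5))  # 1750.0
```

The specific constants matter less than the principle: the observed response, not the model’s prediction, drives the update.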

Test Ideas, but Keep a Foot in the Science Door

Because there are infinite ways in which you can structure a program, and because science can’t investigate them all, there will be many instances where you think a particular concept might work, but you don’t have any hard studies to show that it will.

Does that mean you have to wait until the science catches up to your ideas? Of course not! If your ideas are based on sound reasoning or experience, then feel free to test them out (as long as they won’t do harm!).

For example, I’ve been experimenting with alternating highly submaximal days with maximal days in my own training program. There’s no research on this at all; I started experimenting with it based on some anecdotes of it working quite well. There are also some theoretical reasons why it may be effective (such as allowing for recovery while maintaining gains and keeping your body sensitized to the training stimulus), reasons I will expand upon in a future article in my Research Review. I’ve had some good success with it, so now I’m testing the idea out with some clients.

Now, this doesn’t mean you can test out any random idea that comes to your mind. It needs to have a sound scientific reasoning behind it. For example, having your client do 3 exercises for 3 sets each, 3 days per week, for 3 weeks each month, because you think the number 3 magically aligns the universe with your chakras, is not sound reasoning.

Use Science as a Starter, Not a Statute

Remember that science helps provide you with a guide on how to structure training and diet programs. It’s a good base to start from, but it can’t give you all the answers.

When you are designing a program, you are guessing what you think is going to work. By basing your guesses on science, you make it an educated guess and improve the probability that things are going to work. However, it’s still an educated guess. When you prescribe a calorie intake, it’s an educated guess. When you set a protein level, it’s an educated guess. When you decide how many days per week to train each muscle group, it’s an educated guess. Your guess may be right, or it may need to be modified based on how the client responds.

Really, coaching is just a series of educated guesses that you’re giving to a client. By melding the world of science with the needs, background, and preferences of the individual, you improve the probability that your educated guesses will be the right ones.

Comments
Chris

Good article, thanks!
One caveat: I think we’re sometimes a bit too quick to deduce true interindividual differences (aka interaction effects) from the sheer variance of results in a study, and to introduce the “art of coaching” thereafter as a remedy.

The very reason for building groups and randomly assigning participants to them in an RCT is the knowledge that there will be variance due to a lot of (mostly uncontrollable) acute influences, such as sleep, motivation, and nutrition, but also constant ones like genetics.

Now, if a single person reacts differently from the group mean, whether in the opposite direction or in the same direction but to a lesser or greater degree, that does not automatically mean there is a true interaction effect. More probably, it’s just these uncontrollable influences (the very reason we did an RCT in the first place) that resulted in that individual difference.

Of course, there are hints as to whether there is indeed a true interaction effect or just variance: How strong is the difference from the mean? Do the differing participants share some similarities (gender, age, training age, nutrition)? It’s then the task of subsequent research to test a hypothesis about this interaction effect.

As to the remedy of applying the “art of coaching”: I hate to say it, but I think we often deceive ourselves about how valid our art really is:

First of all, outside of a study, you always have the problem of n=1 and an uncontrolled setting: if your client reacted mediocrely to one kind of training some weeks ago, and you change it and they react better now, was it really the change in training? Or some of the myriad factors we cancel out in research with randomization and groups?

Second, in some areas we simply can’t measure it: different training approaches (e.g. different volumes or frequencies) often produce modest effect sizes in research, and even more modest absolute differences. Outside of a study, even if we could be certain that differences stem from our art, that is, from the manipulation of training variables, we often wouldn’t be able to measure the success or non-success: if training A shows a 6% increase in biceps volume and training B shows 7.5% after eight weeks, how on earth can we confirm or dispute this (or any other) result in our client? It may be easier when coaching for strength, but for hypertrophy, the art appears very shaky to me.

James Krieger

Hi, Chris,

Your points are very valid. Unfortunately, there are really no good solutions, and we often have to play the hand we’re dealt and accept the limitations of our methods. You are correct that it can sometimes be nearly impossible to measure what is responsible for a client’s success (or lack of success), or to truly determine whether our changes actually worked (or whether it was something else entirely). It’s also difficult to know if a response is due to genetic variability or other factors.

It comes down to having to feel comfortable with uncertainty. Science itself is uncertain. Decisions and conclusions are made based on probabilities. Coaching is no different…it’s just that the environment is much more uncontrolled, and there’s a lot more uncertainty. We work with that uncertainty and make educated guesses as to what might work for a client. If something doesn’t work, we make educated guesses as to why it didn’t work. But we can never know for certain.

Troy

James Krieger, delivering like always 🙂

I had the pleasure of listening to James speak about insulin and weight regulation in Sydney a couple of years ago and not only was he a fantastic guy, but such a knowledgeable one too.

This bridging of the gap between theory and practice is just so spot on in terms of my personal experiences, and I was so glad, Andy, that you were able to have him come in and share this incredibly valuable information on your site.

I particularly enjoyed the part about being innovative and trying new things. I often have conversations with another health practitioner I know, a physiotherapist, and he’s suggested that if a technique, method, or protocol doesn’t have evidence saying it’s bad or dangerous (as James noted), then trial it; it may be something very simple yet very effective. I couldn’t agree more. There’s absolutely nothing wrong with using science to guide the principal practices, but sometimes in practice, you just can’t depend on what isn’t there yet.

Thanks again for this article James and Andy.

Andy Morgan

Thanks for taking the time to write, Troy. 🙂