Many people have told me that they appreciate my objective, unbiased approach to exercise and nutrition.
I’m going to tell you right now, though, that I am not unbiased.
In fact, I’m very biased.
I’m biased towards science. As a scientist, I’m biased that way by default, and for good reason. Science represents the absolute best process we have for understanding the world around us. I emphasize the word process because that’s what science is.
It’s not a collection of facts or beliefs. It’s a process of how we try to learn what’s true, or at least what is most likely to be true. It involves formulating ideas to describe why we’re observing something (known as formulating hypotheses) and then testing those ideas through data collection and experimentation (known as hypothesis testing). We reject ideas that our data and experiments don’t support, and we further explore the ideas that our data and experiments do support.
Over time, through this scientific process, we develop a body of knowledge of what we consider to be true about the world around us.
In addition to being a scientist, I’m a coach. As a coach, I want to get my clients the best results possible. But to do that, I need an understanding of what is true. In the world of exercise and nutrition, there’s a lot of bullshit out there… bullshit that will result in my clients NOT getting what they want and/or wasting their time and money.
Thus, I turn to science to help me sift through all the bullshit and learn about what is true about training and nutrition. Then, using this scientific evidence, I can formulate coaching decisions.
Melding Science and Coaching
Science is a great tool on which to base coaching decisions, but it has its limitations. When scientists do studies, they are examining groups of people. For example, perhaps the scientists are doing a study comparing high volume training to low volume training. They’ll put one group of people on a high volume program and another group on a low volume program. They’ll then look at strength and muscle gains after 8-12 weeks.
Let’s say the high volume program resulted in greater gains. You might look at that study, conclude that high volume is better, and start putting all your clients on high volume programs. However, you find that some clients end up doing worse (or even getting hurt), while others do better.
What went wrong?
This is a perfect case of not taking into consideration individual needs and preferences. As I wrote in this article with Bret Contreras, studies can only tell you what works on average. However, individuals may vary widely in how they respond to a given training protocol. In a meta-analysis I published along with Brad Schoenfeld, we found that 10+ sets per muscle group per week resulted in the greatest gains in muscle size.
But this is based on average responses across a number of studies. That doesn’t mean everyone should be doing 10+ sets per muscle group per week. Some people will deviate from this average and do better on lower volumes. Others may need very high volumes. Still others may need constant variation in their training volume.
10+ sets per muscle group per week is just a general guideline. It’s a framework from which you can start, but you need to consider individual needs as well.
Let’s get back to my hypothetical example where you put all your clients on a high volume program, and some started to do worse. Perhaps some of the people did worse because they have poor recovery ability and can’t handle high volume. Perhaps some of the people were already doing high volume training and weren’t getting anywhere with it, so it doesn’t make any sense to keep doing high volume if it’s not working. Perhaps some people simply didn’t have the gym time to commit to a high volume program.
This is where the art of coaching comes in.
You take the science as a general guide but then take into consideration individual needs, background, and preferences.
In my hypothetical example, you consider what type of training the person has been doing, how they have responded to volume in the past, the time they have available to train, any injury issues, etc. For example, if they’ve already been training with high volume, perhaps they need a deload or prolonged period of low volume training before you put them on high volume again. If they lack time to train, but just want to bring up a specific body part, perhaps you just hit that one body part with high volume, while putting everything else on a low volume maintenance dose. There are endless ways in which you can meld the science with the needs of the individual.
There Are Infinite Possibilities That Studies Cannot Fully Account For
Another limitation of science is that scientists can’t investigate every variation of training or nutrition program out there.
Think about the endless variations of training programs that can be designed. It’s impossible for scientists to study them all. For example, if you see a study comparing linear periodization to undulating periodization, that study only tells us about a certain type of linear periodization compared to a certain type of undulating periodization, over a very specific time period, in a particular group of people. (For example, perhaps the study was done on untrained men).
Results of the study could be very different with a different linear periodization program, undulating periodization program, time period, or different group of people.
This doesn’t mean you can’t use information from the study to help design your programs; it just means you need to consider the limitations of the research and NOT treat it as a “one-size-fits-all” solution.
Don’t Ignore the Individual
Even if science definitively showed that one particular training program or diet was better than all others, it still doesn’t mean that is what everyone should be doing.
For example, let’s say that research showed that the ABCDE diet resulted in better fat loss than the LMNOP diet. You have a client who hates the ABCDE diet, but likes the LMNOP diet. Should you put that client on the ABCDE diet? Of course not! We know that adherence is by far the biggest predictor of success on any diet.
It doesn’t matter how good the ABCDE diet is on paper; if your client can’t stick with it, it is a bad diet for that client. By trying to put the client on the ABCDE diet, you’re setting the client up for failure. But if you put the client on the LMNOP diet, you are setting up the client for success, even if the former diet does better for people on average.
There are also scenarios where science shows little to no difference between various strategies. This opens up your world of coaching and gives you tremendous flexibility to fit a program to a client’s needs while keeping it evidence-based.
For example, research I published along with Alan Aragon and Brad Schoenfeld showed that meal frequency has little impact on fat loss. That means you have a wide selection of meal frequencies to choose from when programming for a client, and can set up a meal pattern that best fits your client’s preferences and daily schedule, without adversely affecting their results. In fact, by fitting the meal frequency to the client’s schedule, you will likely enhance their results through better adherence.
Differentiate Between What You Think Should Be Happening, and What Is Happening
A lot of times, when we put clients on a training and nutrition program, we think a particular outcome should happen. However, things don’t always happen in the way we think they should, no matter what the science or numbers tell us.
That is where you need to pay attention to the individual, and make adjustments where necessary.
I had one client whom I put on 2300 calories per day as an initial target. According to Kevin’s models, he should have been losing weight on that target. But he wasn’t, and in fact he was complaining that he felt overly full and stuffed. I was surprised that, at his body weight and composition, he wasn’t losing weight. I dropped his calories to 2000. Still, things didn’t budge. I eventually dropped him to 1750 calories per day, and things finally started to move. I was shocked we had to go that low on calorie intake for him, but it ended up working well.
This was a perfect case where I made adjustments based on what actually was happening, using his results and self-reported hunger levels to guide me, versus what I thought should be happening.
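If it helps to see that feedback loop spelled out, here’s a minimal sketch in Python. The step size, goal rate of loss, and floor are made-up numbers for illustration only, not the actual rules I used with this client; the point is simply that the target gets adjusted based on observed results rather than on a model’s prediction.

```python
# Illustrative sketch only: the step size, floor, and goal rate are
# made-up numbers for the example, not prescriptions from this article.

def adjust_calorie_target(current_target: int,
                          observed_weekly_change_kg: float,
                          goal_weekly_change_kg: float = -0.5,
                          step: int = 250,
                          floor: int = 1500) -> int:
    """Nudge the daily calorie target based on what actually happened,
    not on what a model predicted should happen."""
    if observed_weekly_change_kg > goal_weekly_change_kg:
        # Losing more slowly than intended (or not at all): step the
        # target down, but never below a sanity-check floor.
        return max(current_target - step, floor)
    # On track or ahead of schedule: leave the target alone.
    return current_target


# A stalled client: the target steps down until the observed rate catches up.
target = 2300
for observed in [0.0, -0.1, -0.5]:  # kg per week, measured at each check-in
    target = adjust_calorie_target(target, observed)
    print(target)  # 2050, then 1800, then 1800
```

In practice the signal includes more than scale weight (hunger, performance, and adherence all factor in), but the structure of the decision is the same: predict, observe, adjust.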
Test Ideas, but Keep a Foot in the Science Door
Because there are infinite ways in which you can structure a program, and because science can’t investigate them all, there will be many instances where you think a particular concept might work, but you don’t have any hard studies to show that it will.
Does that mean you have to wait until the science catches up to your ideas? Of course not! If your ideas are based on sound reasoning or experience, then feel free to test them out (as long as they won’t do harm!).
For example, I’ve been experimenting with alternating highly submaximal days with maximal days in my own training program. There’s no research on this at all. I started to experiment with it based on some anecdotes of it working quite well. There are also some theoretical reasons why it may be effective (such as allowing for recovery while maintaining gains and keeping your body sensitized to the training stimulus), reasons that I will expand upon in a future article in my Research Review. I’ve had some good success with it, so now I’m testing it out with some clients.
Now, this doesn’t mean you can test out any random idea that comes to your mind. It needs to have a sound scientific reasoning behind it. For example, having your client do 3 exercises for 3 sets each, 3 days per week, for 3 weeks each month, because you think the number 3 magically aligns the universe with your chakras, is not sound reasoning.
Use Science as a Starter, Not a Statute
Remember that science helps provide you with a guide on how to structure training and diet programs. It’s a good base to start from, but it can’t give you all the answers.
When you are designing a program, you are guessing what you think is going to work. By basing your guesses on science, you make it an educated guess and improve the probability that things are going to work. However, it’s still an educated guess. When you prescribe a calorie intake, it’s an educated guess. When you set a protein level, it’s an educated guess. When you decide how many days per week to train each muscle group, it’s an educated guess. Your guess may be right, or it may need to be modified based on how the client responds.
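To put a number on what an educated guess looks like, here’s a small Python sketch. The multipliers (roughly 15 kcal per pound of body weight and about 1.8 g of protein per kilogram) are generic rules of thumb used purely for illustration, not the specific numbers I prescribe; the output is a starting estimate you fully expect to revise.

```python
# Generic rules of thumb only, used to illustrate the idea of a starting
# estimate; these are not the specific numbers prescribed in this article.

def starting_targets(bodyweight_kg: float, kcal_per_lb: float = 15.0,
                     protein_g_per_kg: float = 1.8) -> tuple[int, int]:
    """First-guess daily calorie and protein targets.

    Both values are educated guesses to be revised once you see how the
    client actually responds.
    """
    bodyweight_lb = bodyweight_kg * 2.2046
    calories = round(bodyweight_lb * kcal_per_lb)
    protein = round(bodyweight_kg * protein_g_per_kg)
    return calories, protein


print(starting_targets(90.0))  # e.g. (2976, 162) -- a starting point, not an answer
```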
Really, coaching is just a series of educated guesses that you’re giving to a client. By melding the world of science with the needs, background, and preferences of the individual, you improve the probability that your educated guesses will be the right ones.