A couple of important distinctions will help consultants like you create better client results and win more business by, ironically, lowering the quality bar on your work.
Last week’s article talked about perfectionism being a form of procrastination, and the importance of focusing on big wins while inuring yourself to little losses. (Last week’s article also referenced dinosaurs… as any good consulting blog should.)
The article focused on marketing and infrastructure; nevertheless, a couple of readers pointed out that purposely allowing errors in work bound for clients is not good consulting.
Fair enough. Let’s take a look at the role of perfectionism in our consulting output.
Many, many consultants are perfectionists. No matter how good their work is, in their eyes the gaps loom larger than the gains. Their rating scale is skewed, which is why I’ve provided the handy translation tool below.
Aiming for good work (on the perfectionist’s scale) actually robs your client of value. The pursuit of “good” incurs needless delays, during which disgruntled employees and customers leave, problems start to fester and compound, and value-building opportunities are missed.
Still, we’re in uncomfortable territory for many consultants. Therefore, let me give you two practical and concrete quality distinctions that you can apply to your consulting work:
Statistical Significance vs. Clinical Importance
Imagine the laboratory test results for a new cancer drug called Chocalix: compared to current treatments, Chocalix significantly reduces the number of cancer cells. However, there’s absolutely no change in the percentage of patients who die from cancer, the duration of life, the symptoms, the quality of life, or anything else that patients would notice. Chocalix would also cost ten times as much as current products. Should patients be advised to switch to Chocalix?
As a consultant, you can make your work more robust and, with time, develop ideas, recommendations, and plans that are theoretically better than those you produce fairly quickly. However, is the theoretical advantage clinically important?
Unless taking the time to improve your work product will make a meaningful, noticeable, positive difference in the client’s behavior or results, it’s not worth the additional time and effort.
Correct vs. Complete
Correct means you’re pointing your client in the right direction. Delivering correct, error-free work is paramount. Errors—even small errors—instill doubt in your consulting, your advice, and your value. That’s why I’m a stickler for creating error-free client deliverables.
Delivering complete work is an entirely different truffle. Complete means adding information. It’s the consulting equivalent of adding decimal places to increase precision.
You can always pile on more information, expand your work, and investigate further. But will it change the trajectory of your client’s results or your recommendation? If not, don’t do it.
Correct vs. complete requires judgment. How much information is enough?
When I’m conducting interviews in B2B markets, I never let the first three or four interviews guide my opinions. On the other hand, if I conduct ten interviews with people I’d expect to have different opinions and they all say the same thing, I don’t feel compelled to conduct ten more.
These two distinctions share a common question: Will improving my work product meaningfully change the client’s results, behaviors, plans, or actions?
Hitting the target now is usually worth more than hitting the bull’s-eye later.
Even better is a collaborative process in which you provide (woefully) incomplete information to the client and give them the opportunity to critique, collaborate, and direct the next round of work. That process prioritizes correctness and real-world effectiveness over completeness and theory.
How else do you deliver high quality without falling into the perfectionist trap?
Text and images are © 2018 David A. Fields, all rights reserved.