The Theory of Constraints Changed My Life — Here’s What the MBA Leaves Out and What Actually Works
Goldratt’s Bottleneck Gospel Is the Most Immediately Applicable Operations Tool in the MBA Curriculum — and the Most Consistently Misapplied One in the Real World
Every Machine Running at 80%, 40% of Orders Shipping Late, and One Constraint Nobody Had Bothered to Find
Get the book: The Unfair Advantage: Weaponizing the Hypomanic Toolbox | Subscribe: Stagnation Assassin Show on YouTube
I walked into a manufacturing plant where the production managers were proud of their utilization rates. Every machine was running at 80% or above. They were efficient. They were also shipping late on 40% of their orders. Here’s what nobody had done: find the bottleneck. One machine — one single constraint — was determining the output of the entire system, and every other machine running at 80% was just building inventory that piled up in front of it. They were efficiently producing extra waiting.

Eliyahu Goldratt’s Theory of Constraints would have fixed this in 30 days. It took them two years and a consultant. That story is why I think every operator running a production or service environment needs to master this framework before they spend a dollar on any efficiency improvement program, and it’s why I called TOC the most immediately applicable operations tool in the MBA curriculum — and one of my absolute favorites. It changed my life. Let me tell you what the MBA version gets right, what it leaves out, and what you actually need to deploy it in the real world.
The Textbook Version: What the MBA Gets Right — and It’s Not Wrong
Goldratt introduced the Theory of Constraints in his 1984 novel The Goal, written with Jeff Cox — my absolute favorite business book, full stop. The framework’s foundational argument is both simple and revolutionary: every system has at least one constraint, a bottleneck that limits the system’s throughput. And improving anything other than that constraint improves absolutely nothing. That last sentence is the one that should stop every operator who has ever approved an efficiency improvement initiative without identifying the system constraint first.
The five focusing steps provide the operational methodology. First, identify the constraint — find the bottleneck that limits system output. Second, exploit the constraint — get maximum output from it without any additional investment. Third, subordinate everything else — adjust all other processes to support that constraint’s maximum output rather than their own local efficiency. Fourth, elevate the constraint — if the first three steps are insufficient, invest in increasing constraint capacity. Fifth, repeat — once the constraint is broken, find the next one and restart the process. That sequence is the entire operational playbook, and it is correct.
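The five-step sequence is easiest to see as a loop. Here is a minimal sketch in Python, using hypothetical station names and capacities (none of these numbers come from the plant in the story, and steps two through four are collapsed into a single capacity bump for illustration):

```python
# Hypothetical data: each station's capacity in units per hour.
stations = {"cutting": 120, "welding": 45, "painting": 90, "assembly": 70}

def five_focusing_steps(capacities, target=100, elevate_increment=20):
    """Sketch of Goldratt's loop: system throughput is capped by the
    slowest station, so improve only that station, then repeat."""
    caps = dict(capacities)
    history = []  # which station was the constraint on each pass
    while min(caps.values()) < target:
        # Step 1: identify the constraint (the lowest-capacity station).
        constraint = min(caps, key=caps.get)
        # Steps 2-4 collapsed for illustration: exploit, then elevate.
        caps[constraint] += elevate_increment
        history.append(constraint)
        # Step 5: repeat -- the constraint may have moved.
    return caps, history

caps, history = five_focusing_steps(stations)
# history records how the constraint moves between stations as each
# bottleneck is broken; system throughput is always min(caps.values()).
```

Note what the loop does not do: it never touches cutting, the fastest station, no matter how attractive an improvement project there might look. That is the whole argument in four lines of control flow.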
The companion metric framework is equally important and equally underused. Goldratt replaced traditional cost accounting with three throughput accounting measures: throughput — the rate at which the system generates money through sales; inventory — the money the system has invested in purchasing things it intends to sell; and operating expense — the money the system spends turning inventory into throughput. The goal is to maximize throughput while minimizing inventory and operating expense. That three-variable framework is a superior decision-making tool to standard cost accounting for any constrained production system, and the MBA programs that teach it without teaching operators how to actually change their performance metrics to reflect it are teaching the theory without the implementation.
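The three measures roll up into the two derived figures Goldratt uses for decisions: net profit (throughput minus operating expense) and return on investment (net profit over inventory). A minimal sketch, with illustrative dollar amounts that are assumptions, not real plant data:

```python
def throughput_accounting(throughput, inventory, operating_expense):
    """Goldratt's three measures rolled into the two derived figures
    used for decision-making: net profit and return on investment."""
    net_profit = throughput - operating_expense
    roi = net_profit / inventory
    return net_profit, roi

# Hypothetical annual figures for a small plant.
net_profit, roi = throughput_accounting(
    throughput=2_400_000,        # money generated through sales
    inventory=800_000,           # money invested in things intended for sale
    operating_expense=1_900_000, # money spent turning inventory into throughput
)
# net_profit = 500_000, roi = 0.625
```

The decision rule that falls out of this: any proposed change is judged by its effect on these three variables at the system level, not by its effect on any one station’s cost per unit.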
Where TOC Changed Everything I Did in Manufacturing
I have applied the Theory of Constraints in manufacturing environments at some of the largest industrial companies in America, and the track record is consistent every time it is correctly deployed: the exploit step alone — getting maximum output from the existing constraint through scheduling improvement, reduced setup time, and quality improvement at the bottleneck — typically produces 15% to 30% throughput improvement before any capital is spent at all. That is not a theoretical projection. That is the documented operational result of correctly identifying the constraint and focusing every resource in the system on its protection and maximum utilization.
What hits me every time I deploy this framework is the same thing that hit me in that plant with the 80% utilization rates and 40% late shipments: the organization was not failing because it was inefficient. It was failing because it was efficiently optimizing the wrong variables. Every machine running at 80% looks like operational health if your metric is utilization. It looks like catastrophic misallocation of effort if your metric is system throughput. The Theory of Constraints forces the metric change before it demands the operational change — and the metric change is where most of the resistance lives. Visit the Stagnation Assassin Show podcast hub for more on the metric architecture changes that separate operations that improve from operations that optimize the wrong measurements faster and faster.
TOC also applies with equal force outside manufacturing, which most operators who encounter it in an industrial context miss entirely. I have applied constraint thinking to sales pipelines — where is the bottleneck in the conversion process? — to customer service operations — where is the bottleneck in resolution time? — and to product development — where is the bottleneck in time to market? The constraint is always present somewhere in every system. The only question is whether you found it deliberately or whether it found you through late shipments and customer complaints. The framework that works on a production floor works on a service workflow. The identification methodology is identical. The cultural resistance to subordination is, if anything, worse in service environments where the concept of intentionally slowing down a non-constraint department is even more counterintuitive than it is on the shop floor.
What the MBA Leaves Out: Three Real-World Implementation Failures
Here’s where the professors sit down and the operators stand up. TOC has three implementation challenges that the textbook version systematically underweights, and each one is a deployment killer if you don’t anticipate it before you begin.
Constraint identification is harder than it sounds in any real operating environment. In a simple production system with a single product and a linear process, the constraint is usually obvious — it’s the station with the queue in front of it. In complex environments with multiple products, shared resources, and variable demand, identifying the true system constraint requires careful data analysis, and organizations frequently misidentify it. Focusing improvement efforts on a non-constraint resource while the real bottleneck goes unaddressed is not just inefficient — it is actively counterproductive, because the improvement at the non-constraint increases the rate of inventory accumulation in front of the actual bottleneck. I have walked into plants where the operations team was convinced they had found the constraint and were wrong. The throughput analysis — mapping the complete process from raw material to shipped product and identifying where inventory is actually accumulating — is the only reliable identification method, and it requires more analytical rigor than most plant floor conversations produce.
The subordination step is where most TOC implementations fail, and the failure mode is entirely cultural rather than technical. Telling a high-utilization department to slow down — to intentionally reduce their efficiency to avoid piling inventory in front of the constraint — violates every management instinct of every manager who has been measured on utilization, efficiency, or throughput at the local level. The local efficiency metrics that most organizations use will actively fight the subordination step. The department head whose performance review is built on utilization rates will not voluntarily reduce utilization to protect a system-level throughput metric unless the performance management architecture has been changed to reward system optimization rather than local efficiency. That change is enormous, uncomfortable, and non-negotiable. Without it, the subordination step collapses under the weight of the existing incentive structure.
Constraints move. Once you elevate a constraint and break it, a new constraint emerges somewhere else in the system. Organizations that celebrate breaking a constraint and then stop applying the five focusing steps discover that they have simply relocated the bottleneck without realizing it, and the throughput improvements they achieved begin eroding against the new constraint that is now running unmanaged. TOC is not a one-time improvement project. It is a permanent operational discipline — the five focusing steps applied continuously, cycling to the new constraint every time the current one is broken. Grab The Unfair Advantage for the complete framework on building the operational discipline architecture that sustains TOC as a continuous improvement engine rather than a one-time consultant engagement.
The Operator’s Upgrade: Three Moves to Deploy TOC Without the Two-Year Consulting Engagement
The manufacturing plant in my opening story took two years and a consultant to figure out what Goldratt’s framework would have solved in thirty days. Here are the three moves that get you to thirty days without either.
Conduct a throughput analysis before any efficiency improvement initiative. Map the complete process from raw material to shipped product. Find where inventory is actually accumulating — not where the utilization reports say the bottleneck should be, but where the physical inventory is actually piling up. That accumulation point is your constraint. Everything else in the improvement priority list is context until that constraint is addressed. This analysis takes days, not weeks, and it should be the prerequisite for approving any capital improvement project in a production environment.
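The analysis itself reduces to a one-line diagnostic once you have the data: snapshot the physical work-in-process in front of each station and find the largest pile. A minimal sketch, with hypothetical station names and WIP counts:

```python
# Hypothetical WIP snapshot: units of inventory physically sitting
# in front of each station -- not what the utilization reports say.
wip_in_front_of = {"cutting": 4, "welding": 260, "painting": 12, "assembly": 9}

def find_constraint(wip):
    """The constraint is where inventory actually accumulates:
    the station with the largest physical queue in front of it."""
    return max(wip, key=wip.get)

constraint = find_constraint(wip_in_front_of)  # -> "welding"
```

In a real multi-product environment you would take this snapshot repeatedly over a representative demand period, because a single snapshot can be distorted by a one-off disruption; the constraint is the station where the queue persists.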
Change your performance metrics to support constraint exploitation. Stop measuring utilization on non-constraint resources — it does not matter, and measuring it actively incentivizes the behavior that fights subordination. Start measuring throughput units per day as the primary operational performance metric. Add buffer management — the maintenance of protective inventory immediately before the constraint — as a tracked KPI. The metric architecture change is the cultural lever that makes the subordination step survivable: when the performance management system rewards system throughput rather than local efficiency, the department head whose utilization drops in service of the constraint’s protection is rewarded rather than penalized.
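Buffer management as a tracked KPI can be as simple as a zone check on the protective inventory in front of the constraint. A sketch under stated assumptions: the two-thirds and one-third thresholds are the conventional green/yellow/red split from drum-buffer-rope practice, used here as illustrative defaults rather than universal values:

```python
def buffer_status(on_hand, target_buffer):
    """Zone check on the protective buffer in front of the constraint.
    Thresholds (2/3 and 1/3 of target) are the conventional
    green/yellow/red split, assumed here for illustration."""
    fraction = on_hand / target_buffer
    if fraction >= 2 / 3:
        return "green"   # buffer healthy, no action needed
    elif fraction >= 1 / 3:
        return "yellow"  # plan expediting to refill the buffer
    else:
        return "red"     # constraint at risk of starving: expedite now

status = buffer_status(on_hand=25, target_buffer=60)  # -> "yellow"
```

The point of tracking the zone rather than the raw count is that it tells the floor what to do: green means leave it alone, yellow means plan a refill, red means the constraint is about to sit idle.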
Assign a constraint manager. Give one person explicit accountability for the performance and protection of the system constraint. That person’s sole job is to ensure that the constraint never sits idle and is never starved of material. In a throughput-limited system, this role is more operationally important than any other position in the facility. The constraint is where 100% of the system’s output improvement lives — everything else is noise until the constraint is protected, fully exploited, and actively managed by someone whose performance measurement reflects the constraint’s output rather than their department’s local efficiency. Visit the Todd Hagopian blog for more on the constraint manager role and the performance architecture required to make it the most important operational position in a throughput-limited environment.
Frequently Asked Questions
What is the Theory of Constraints and why is it more immediately applicable than other operations frameworks?
The Theory of Constraints, introduced by Eliyahu Goldratt in his 1984 novel The Goal, holds that every system has at least one constraint — a bottleneck that limits the system’s throughput — and that improving anything other than that constraint improves nothing. Its immediate applicability comes from the precision of the diagnostic: rather than broad improvement initiatives across the entire operation, TOC focuses every resource on the single point that determines 100% of the system’s output improvement potential. The exploit step alone — getting maximum output from the existing constraint without additional capital — typically produces 15% to 30% throughput improvement before any investment is made. No other operations framework delivers that return that quickly from identification alone.
What is throughput accounting and how is it different from standard cost accounting?
Throughput accounting replaces the cost-per-unit focus of standard accounting with three system-level metrics: throughput (the rate the system generates money through sales), inventory (money invested in things intended for sale), and operating expense (money spent turning inventory into throughput). The critical difference is that throughput accounting treats direct labor as a fixed cost rather than a variable cost, which eliminates the standard cost accounting incentive to maximize production volume at every workstation regardless of downstream constraint capacity. Standard cost accounting rewards high utilization everywhere. Throughput accounting rewards high throughput at the system level. In a constrained production environment, those two reward structures produce directly opposite operational decisions.
Why does the subordination step fail so often in real manufacturing environments?
Because subordination requires telling a high-performing, high-utilization department to intentionally slow down — to reduce their local efficiency metric in service of a system-level throughput objective — and virtually every performance management architecture in manufacturing rewards local efficiency rather than system throughput. The department head whose utilization drops from 85% to 60% because they’re subordinating to the constraint is performing exactly as TOC requires and will be penalized by every traditional performance metric simultaneously. The metric architecture must change before subordination is deployed; without that change, the cultural resistance is overwhelming and institutionally rational. Most implementations attempt subordination anyway, and that resistance destroys the implementation.
How do you apply the Theory of Constraints outside of manufacturing?
The application is identical in structure to the manufacturing deployment: map the complete process from input to output, find where work is accumulating (the queue equivalent of inventory piling in front of a bottleneck), and apply the five focusing steps to that point. In a sales pipeline, the constraint might be the proposal review process, the legal sign-off step, or the executive approval cycle — wherever deals are accumulating in the funnel without moving forward. In product development, it might be the testing phase, the design review cycle, or the regulatory approval process. In customer service, it might be the escalation resolution time, the system access step, or the specialist availability queue. The constraint identification methodology — find where the work piles up — is universal. The cultural resistance to subordination is equally universal and requires the same metric change to overcome.
What is the constraint manager role and why is it more important than traditional department manager positions?
In a throughput-limited system, 100% of the output improvement potential lives at the constraint. Every other resource in the system is either feeding the constraint or processing its output — neither of which determines total system throughput. The constraint manager’s job is to ensure that the constraint never sits idle (starved of material or waiting for a preceding process) and never processes defective input (which would waste constraint capacity on rework). That single responsibility — protecting and maximizing constraint utilization — determines the entire system’s throughput. A traditional department manager at a non-constraint station manages a resource that, by definition, cannot improve system output regardless of their performance. The constraint manager manages the resource that, by definition, determines everything. The performance management architecture should reflect that difference explicitly.
About This Podcaster
Todd Hagopian has transformed businesses at Berkshire Hathaway, Illinois Tool Works, and Whirlpool Corporation, selling over $3 billion of products to Walmart, Costco, Lowe's, Home Depot, Kroger, Pepsi, Coca-Cola and many more. As Founder of the Stagnation Intelligence Agency and former Leadership Council member at the National Small Business Association, he is the authority on Stagnation Syndrome and corporate transformation. Hagopian doubled his own manufacturing business acquisition value in just 3 years before selling, while generating $2B in shareholder value across his corporate roles. He has written more than 1,000 pages of books, white papers, implementation guides, and masterclasses on Corporate Stagnation Transformation, earning recognition from Manufacturing Insights Magazine and Literary Titan. Featured on Fox Business, Forbes.com, OAN, Washington Post, NPR and many other outlets, his transformative strategies reach over 100,000 social media followers and generate 15,000,000+ annual impressions. As an award-winning speaker, he delivered the results of a Deloitte study at the international auto show and other conferences. Hagopian also holds an MBA from Michigan State University with a dual major in Marketing and Finance.
About This Episode
Host: Todd Hagopian
Organization: Stagnation Assassins
Episode: Stagnation Assassin MBA — Theory of Constraints: Goldratt’s Bottleneck Gospel, the Five Focusing Steps, and the Three Real-World Implementation Failures the Textbook Skips
Key Insight: Improving a non-constraint is just waste dressed up as productivity — find the bottleneck, fix the bottleneck, and stop optimizing everything else until you do.
Your assignment this week: before you approve any efficiency improvement initiative in your production or service operation, conduct a throughput analysis. Map the complete process from input to output and find where work is physically accumulating. That accumulation point is your constraint. Every improvement dollar spent anywhere else is a dollar that cannot improve your system’s output. Visit toddhagopian.com for the complete Theory of Constraints implementation guide and the constraint manager role deployment protocol. Is your operation improving its constraint — or efficiently optimizing everything around it while the bottleneck runs your numbers?

