The Incentive Audit

Economics & Markets · Mental Model No. 002

Diagnosis by Dollar Sign

“Show me the incentive and I will show you the outcome.”
Charlie Munger
A surgeon in Lincoln, Nebraska spent years removing healthy gallbladders from patients who needed no surgery. He was paid per procedure. He believed he was being thorough. The incentive had rewritten his clinical judgment so completely that he could not see the corruption from the inside. This is the Incentive Audit: a four-mode diagnostic for identifying the hidden reward structures that explain behavior no amount of talent, ethics, or intelligence can override.

Benjamin Franklin · 1706–1790
New York Stock Exchange Trading Floor · Wall Street · 1963
Adam Smith · 1723–1790
A surgeon in Lincoln, Nebraska spent years sending bushel baskets full of normal gallbladders down to the pathology lab at the leading hospital in town. Not diseased gallbladders. Normal ones. Healthy organs removed from patients who didn’t need the surgery, by a doctor who was paid per procedure. The hospital, operating with the kind of permissive quality control for which community hospitals have always been famous, let it continue for years before removing him from the medical staff.

Charlie Munger grew up in that town. The surgeon was a family acquaintance. Munger watched the whole thing unfold, and from it he drew a conclusion that became the first principle of his analytical framework: incentive-caused bias is the most powerful force in professional life, and the person it corrupts most completely is the person doing the corrupting. The surgeon didn’t think he was removing healthy organs. He thought he was being thorough. The compensation formula had rewritten his clinical judgment so seamlessly that he couldn’t detect the distortion from the inside. Intelligence offered no defense. Neither did good intentions. Neither, for that matter, did a medical degree. This is the dark magic of misaligned pay: it doesn’t make you dishonest. It makes you wrong, with total confidence.

Ben Franklin put the operating principle more bluntly two centuries earlier: “If you would persuade, appeal to interest, not to reason.” Munger quoted Franklin constantly because Franklin had identified the load-bearing insight in nine words. Reason is what people use to explain their decisions. Interest is what actually drives them. The gap between those two forces is where most organizational failures live, where most negotiation mistakes are born, and where most strategic blind spots fester until they metastasize into something expensive. Evolutionary biologists have a term for organisms that signal one thing while doing another: mimicry. The corporate version is called a mission statement.

This is the Incentive Audit. It runs on a single diagnostic question: What is every party in this situation actually paid to do? Not what they say they value. Not what they intend. What the architecture of payments and penalties makes almost inevitable. The answer to that question will explain conduct that talent, character, and intelligence cannot.

And like the Inversion Stack, it comes in four modes.

01

Mode One

The Surgeon Test

Where is someone being paid in a way that could corrupt their judgment without them knowing it?

“I think I’ve been in the top five percent of my age cohort all my life in understanding the power of incentives, and all my life I’ve underestimated it.”
Charlie Munger

When the original Medicare law was passed, various experts, including Ph.D. economists, forecast its costs using simple extrapolations of past healthcare spending. Their projections landed at less than 10% of what actually happened. A tenfold miss, by people whose entire professional identity was built on the ability to predict costs. The error was not arithmetic. It was conceptual. The economists projected costs as if behavior would remain unchanged after the incentives changed. Once Medicare installed new rules—patients insulated from costs, doctors paid per visit, hospitals paid for volume—behavior shifted to match the new incentives, and the numbers became unrecognizable. This is the same mistake a Newtonian physicist makes when ignoring relativistic effects: the model works perfectly at low speeds and catastrophically at high ones. The Medicare economists were doing low-speed projections on a system that had just broken the sound barrier.

Economists have a name for this phenomenon. The Lucas Critique holds that any model calibrated on historical data will break the moment the underlying policy changes, because the policy change rewrites the conduct the model was built to predict. Munger’s version is simpler and sticks better: “Once they put in place all these new incentives, the behavior changed in response to the incentives.” Every operator who has launched a new pay plan and been surprised by the results has collided with this principle. Every one. The surprise is always the same: we changed the rules, and people changed what they did. As if that outcome were somehow unexpected.

Dan Ariely documented the same mechanism in corporate governance. The SHE Index counts the percentage of women on corporate boards and in senior positions. What you actually want to know is how women feel working at that company. But feelings are hard to measure. Board composition is easy to count. So companies optimize for the easy metric, check the box, and change nothing about the actual experience. The result: companies in the index systematically underperform the S&P 500 by a significant margin, because the incentive created by the metric is to perform diversity rather than practice it. This is the corporate equivalent of studying for the test instead of learning the material. The transcript looks great. The education is hollow.

Charles Goodhart, the British economist, formulated the principle in the context of monetary policy in the 1970s, but it applies with brutal precision to every dashboard, OKR sheet, and performance review in corporate life: when a measure becomes a target, it ceases to be a good measure. The SHE Index measures board seats. It should measure employee experience. The distance between those two things is where the perverse incentive takes up residence. If you have ever watched a team hit every KPI while the actual business deteriorated beneath them, you have watched Goodhart’s Law operate in real time. Congratulations. You were standing inside the Surgeon Test and didn’t know it.

The pattern across all three cases is identical. A measurement system creates an incentive. The incentive changes behavior. The change is invisible to the people inside the system because they’ve adapted to the new gradient without noticing the adaptation. The surgeon believes he’s being thorough. The Medicare forecaster believes costs are predictable. The corporate board believes it’s progressive. Each is responding to an incentive formula they haven’t identified as an incentive formula.

Diagnostic

Name the three largest rewards (financial, social, or reputational) flowing to the person whose judgment you are relying on. If any of those rewards could plausibly shift the threshold of their professional judgment, you are inside the Surgeon Test whether you can see it or not.

02

Mode Two

The Time Horizon Trap

What is the time horizon of each party’s payment, and where do those horizons diverge?

“The chains of habit are too light to be felt until they are too heavy to be broken.”
Warren Buffett

The average American in the private sector holds a job for 3.7 years. That single number, by itself, explains most of what is broken about the American health insurance system. If an insurance company knows it will churn you in 3.7 years on average, its calculus narrows to covering two categories: things that pay themselves back in less than 3.7 years, or things with such overwhelming demand from employees that employers feel they must cover them to remain competitive. Preventive care that pays off over a decade? Not covered. Chronic disease management that compounds value over twenty years? Not their problem. The arithmetic is precise and merciless. Nobody in the system is acting irrationally. Everybody in the system is acting exactly as the time horizon of their pay dictates. The system works perfectly for the people who designed it. It only fails the people trapped inside it, and those people have no voice in the design.
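
The insurer’s arithmetic above can be sketched directly. All intervention costs and savings below are hypothetical, invented for illustration; only the 3.7-year tenure figure comes from the text. The filter is a single comparison: cover anything whose payback period is shorter than expected tenure, deny the rest.

```python
# Hypothetical interventions: (upfront cost, annual savings once in place).
# These numbers are illustrative, not real insurance data; they only show
# how a 3.7-year expected tenure filters what gets covered.
interventions = {
    "urgent care visit": (200, 150),              # pays back in ~1.3 years
    "diabetes prevention program": (3_000, 400),  # pays back in ~7.5 years
    "smoking cessation": (500, 200),              # pays back in 2.5 years
}

TENURE_YEARS = 3.7  # average private-sector job tenure

for name, (cost, annual_savings) in interventions.items():
    payback_years = cost / annual_savings
    covered = payback_years < TENURE_YEARS  # the entire coverage decision
    print(f"{name}: payback {payback_years:.1f} yr -> {'cover' if covered else 'deny'}")
```

The decade-scale intervention loses not because it fails to create value, but because the value lands after the insurer expects to have churned the member.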

Jamie Dimon encountered the same architecture at J.P. Morgan, but from the other side of the table, as a designer who could fix it. When he arrived, the bank’s traders were paid 20% of profits with access to extreme borrowing ratios. The math was straightforward: if you’re running 30-to-1 and getting 20% of the upside, going to 40-to-1 adds a third to your exposure, and roughly a third to your bonus. The pay model did not merely tolerate excessive risk. It made excessive risk the rational move for every individual trader, even though it could destroy the entire firm. Dimon eliminated the profit-pool formula and the borrowing allowances. He lost people in the process. But he had identified the architectural defect: when the time horizon of the bonus (one year) is shorter than the time horizon of the risk (decades), the incentive will always favor the short bet over the safe one. This is not a character problem. It is a math problem. And math problems have math solutions.
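
The trader’s-eye view of that math can be sketched in a few lines. All figures here (equity, returns, funding rate, even the 20% bonus share applied this way) are hypothetical simplifications, not J.P. Morgan’s actual book; the point is the asymmetry: the trader keeps a share of the upside and eats none of the downside beyond losing the job.

```python
# Simplified trader payoff under leverage (all parameters hypothetical).
def trader_bonus(equity, leverage, asset_return, funding_rate, bonus_share=0.20):
    gross = equity * leverage * asset_return           # return on the levered book
    funding = equity * (leverage - 1) * funding_rate   # cost of the borrowed capital
    profit = gross - funding
    return bonus_share * max(profit, 0.0)              # trader keeps upside only

b30 = trader_bonus(equity=100e6, leverage=30, asset_return=0.02, funding_rate=0.005)
b40 = trader_bonus(equity=100e6, leverage=40, asset_return=0.02, funding_rate=0.005)
print(f"bonus at 30x: ${b30:,.0f}; at 40x: ${b40:,.0f} ({b40 / b30 - 1:+.0%})")
```

In a bad year the `max(profit, 0.0)` floor means the extra leverage costs the trader nothing, while the firm absorbs the entire levered loss. That is the architecture Dimon dismantled.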

This is the defect that makes quarterly capitalism so reliably destructive. The CEO whose pay vests over three years will make different decisions than the CEO whose pay vests over ten. Not because one is greedier. Because the comp plan has a clock built into it, and every rational actor reads the clock. Thermodynamics has a useful analog: entropy always increases in the direction of time’s arrow. In incentive physics, value always flows in the direction of the shortest payout cycle.

Rome’s political system embedded the same misalignment, and it produced a paradox that persisted for centuries. Roman senators were politicians in constant competition for public support. Any senator who signed a damaging peace treaty could be eviscerated in the Senate as a coward, a fool, or a traitor. The political calculus—survive the next election cycle—meant Roman senators would overwhelmingly vote to continue a war rather than admit defeat. This made Rome ferociously resilient in warfare. It also made Rome incapable of strategic retreat, even when retreat was the correct move. The time horizon of the political calculus (the next election) was shorter than the time horizon of the strategy (the next century of Roman power). The mismatch produced persistence that looked like strength but was often just an inability to quit.

Hannibal destroyed Roman armies repeatedly during the Second Punic War. Any other Mediterranean power would have sued for peace. Rome couldn’t. Not because Roman senators were braver than Carthaginian ones, but because the Roman senator who proposed peace would have been committing career suicide. The architecture made surrender impossible, which made Rome terrifying to fight, because opponents knew that defeating a Roman army did not end the war. Rome would simply raise another army and keep coming. The most powerful strategic commitment of the ancient world was one that nobody had chosen deliberately. It was a byproduct of the election calendar. Game theorists would recognize this as a credible commitment: valuable precisely because it cannot be revoked, and credible precisely because it was never a choice.

Diagnostic

Map every party in your deal, partnership, or organization on a timeline. Mark when each party’s payout arrives. Where the timelines diverge by more than 2x, you have found a Time Horizon Trap. The wider the gap, the more violently the parties’ actions will diverge, even if they share identical stated goals.
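
The diagnostic above is mechanical enough to write down. The parties and horizons below are hypothetical placeholders; substitute your own. The check is exactly the one the text describes: flag every pair of parties whose payout horizons diverge by more than 2x.

```python
# Minimal sketch of the Time Horizon Trap diagnostic (all parties hypothetical).
from itertools import combinations

horizons = {                # months until each party's payout arrives
    "sales rep": 1,         # monthly commission
    "account manager": 12,  # annual bonus
    "customer": 36,         # value realized over the contract term
    "shareholder": 60,      # multi-year equity value
}

for (a, ha), (b, hb) in combinations(horizons.items(), 2):
    ratio = max(ha, hb) / min(ha, hb)
    if ratio > 2:  # the text's threshold for a Time Horizon Trap
        print(f"TRAP: {a} ({ha} mo) vs {b} ({hb} mo) diverge {ratio:.0f}x")
```

Note that the customer and the shareholder, whose horizons differ by less than 2x, are the only pair not flagged: shared clocks, shared interests.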

Hannibal Crossing the Rhône · Second Punic War · 218 BC
03

Mode Three

The Alignment Engine

Can I design an architecture where self-interest and the desired outcome become the same action?

“Never, ever, think about something else when you should be thinking about the power of incentives.”
Charlie Munger

Federal Express had one hell of a time getting its night shift to perform. The integrity of the entire system required all packages to be shifted rapidly among airplanes at a single central airport each night. The night shift was the bottleneck. Management tried moral suasion. They tried every motivational technique available. Nothing worked. Then someone changed the pay formula from hourly to per-shift: finish the sort, go home, get paid the same. The problem vanished overnight.

The workers hadn’t been unmotivated. They’d been rationally responding to a pay formula that compensated them for taking longer. Every minute of delay was another minute of wages. The motivational speeches, the managerial oversight, the team-building exercises were all aimed at the wrong target. Management was trying to change psychology when they needed to change the paycheck. Fix the architecture, fix the conduct. This is the lesson that a thousand corporate off-sites per year fail to absorb, probably because the off-site itself is paid for by the hour.

Lee Kuan Yew solved Singapore’s healthcare costs the same way: by redesigning who owns the money. In Singapore, you get a health savings account the day you’re born. If you don’t spend the money, you and your heirs get to spend it eventually. The money is yours. Munger noted that Singapore’s system costs 20% of what the American system costs and works better by most measures. The mechanism is architectural, not cultural. People act more sensibly when they’re spending their own money. The American system insulates patients from costs, which sends a signal to over-consume. The Singaporean system makes every dollar visible, which sends a signal to spend carefully. Same humans. Same biology. Same diseases. Different architecture. Radically different outcomes. Adam Smith would have predicted every detail of this from his armchair in Kirkcaldy. The information was available in 1776. We just keep refusing to use it.

Dee Hock, who built Visa, designed what may be the most elegant alignment architecture in corporate history. Visa is a for-profit, non-stock membership corporation. Your ownership consists of irrevocable, non-transferable rights of participation. You own it by participating in it. The percentage of ownership you hold is proportional to the volume you contribute to the network. You can’t sell it. If you stop participating, you stop owning. This single design choice prevents a cascade to the exits—the problem that destroys most networks when early participants cash out—while aligning every member’s self-interest with the network’s growth. Hock didn’t appeal to his members’ altruism. He built a vehicle where self-interest and network interest converged into the same action. No speeches required.

Capital Group, which has managed money for nearly a century, applied the same logic to portfolio management. At Capital Group, a manager running $50 billion and a manager running $5 billion receive the same bonus if their investment results and tenure are the same. The firm doesn’t pay for a money grab. You don’t have to manage a lot of money to do well for yourself. You have to do well on the assets you’re asked to manage. By decoupling pay from assets under management, Capital Group eliminated the perverse gravitational pull that plagues most of the asset management industry: the pull toward gathering assets at the expense of returns. The manager’s interest and the investor’s interest point in the same direction. That alignment has survived for a hundred years. Most fund shops don’t survive twenty. The industry thinks the problem is talent retention. Capital Group suggests the problem is pay design. They’ve been right for longer than most of their competitors have existed.

Diagnostic

Write down what you want people to do. Now write down what your compensation formula actually pays them to do. If those two lists don’t match, no amount of training, culture work, or motivational programming will close the gap. Redesign the formula. The behavior will follow.
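
The two-list check reduces to a set difference. The behaviors below are hypothetical placeholders for your own lists; the diagnostic is the two gaps the comparison exposes.

```python
# Hypothetical behavior lists -- substitute your own. The diagnostic is the
# two set differences: wanted-but-unpaid and paid-but-unwanted behavior.
want = {"qualify prospects hard", "retain customers", "grow accounts"}
pay_for = {"close any deal", "grow accounts"}

print("wanted but not paid for:", sorted(want - pay_for))
print("paid for but not wanted:", sorted(pay_for - want))
# Every entry in either list is a gap no training program will close.
```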

23 Wall Street · J.P. Morgan & Co. Headquarters · The Room Where Incentives Were Redesigned

President Kennedy signing the Interest Equalization Tax, 1963

04

Mode Four

The Second-Order Cascade

If I change this incentive formula, what behavior will change in response, and what will that change cause?

“To every action there is always opposed an equal reaction.”
Isaac Newton

On July 18, 1963, President Kennedy proposed an Interest Equalization Tax to throttle the outflow of American dollars. The tax penalized the sale of foreign securities to American investors. Henry Alexander, then running J.P. Morgan, assembled his officers that afternoon and made a prediction that proved exactly right: “This is a day you will all remember forever. It will change the face of American banking and force all the business off to London. It will take years to get rid of this legislation.”

Kennedy’s intention was to keep capital at home. The actual effect was to build London into a global financial center, because the incentive created by the tax was for banks to move operations offshore, outside the tax’s reach. The entire Eurodollar market, which today dwarfs American domestic banking in daily transaction volume, traces its origins to this single miscalculation. Alexander saw it the afternoon the proposal was announced. Kennedy’s advisors never saw it at all. The difference came down to a single cognitive habit: asking the second question. And then what happens? Newton’s third law applies to economics with the same inexorability with which it applies to physics. Every policy action generates an equal and opposite market reaction. The only variable is whether the policymaker bothered to compute it.

Sam Altman identified a contemporary version of the same cascade in the architecture of search advertising. ChatGPT consistently ranks as the most trusted technology product from a major tech company, which is odd, because AI is the technology that hallucinates. Altman’s explanation: the business model. Google’s ad revenue depends on search results being imperfect. If Google gave you the perfect answer immediately, there would be no reason to buy the ad that sits above it. Users sense this misalignment even if they can’t articulate it. ChatGPT, by contrast, is paid by the user, which means its commercial interest is to give the best possible answer. Trust follows alignment, not accuracy. The second-order effect of Google’s ad model extends well beyond the existence of ads. The entire product is constrained from being as good as it could be, and hundreds of millions of users feel the drag without being able to name it. This is Upton Sinclair’s famous observation updated for the platform era: it is difficult to get a search engine to give you the right answer when its revenue depends on giving you an almost-right answer surrounded by sponsored alternatives.

Apple’s App Store produced the same pattern from the platform side. As Apple discovered the toll-booth economics of taking 30% of every transaction on the most important computer most people own, the drive to innovate on the platform itself eroded. Why improve the product when the revenue comes from taxing other people’s products? DHH described what happened next: “There was a rot that crept in to the foundation.” The 30% cut didn’t just redistribute profits. It changed what Apple cared about. A platform that once defined innovation gradually became a rent-collection operation, because the economics paid more generously for collecting rent than for building. Resistance to improvement is the natural second-order effect of any toll-booth model: every improvement to the platform increases the value flowing through it, which increases the value of the toll, which increases the motivation to protect the toll, which decreases the motivation to do anything that might disrupt it. The feedback loop tightens until the company’s entire strategic posture is defensive. Apple didn’t decide to stop innovating. The economics decided for them.

Munger named the compound version of this pattern: Serpico Syndrome. Frank Serpico joined a near-totally corrupt New York police division. The corruption was driven by two forces operating simultaneously: social proof (everyone around you is doing it) and financial pressure (doing it pays better than not doing it). Either force alone might be resistible. Together, they create a system so powerful that the person who resists it is nearly murdered for his trouble. The second-order cascade goes beyond the spread of corruption. The system develops antibodies against the people trying to stop it, because everyone inside the system now shares the same incentive to maintain the status quo. Whistleblowing becomes the irrational act. Compliance becomes the rational one. The cascade is complete when even the honest people can’t afford to stay honest. If you have ever sat in a meeting where everyone knew the initiative was failing and nobody said a word, you have visited a mild suburb of Serpico Syndrome.

Diagnostic

Before implementing any policy change, draw two columns. Column one: the behavior this new incentive will create. Column two: the behavior that first change will create in turn. If you cannot fill column two, you are Kennedy’s tax advisor. If you can, you are Henry Alexander. The difference is thirty years of global financial history.


Integration

The Cascade

The four modes above work in isolation, but they produce their sharpest results when run in sequence against a single decision.


Failure Mode

The Trap

The Incentive Audit fails when you use it to find evidence for the conclusion you’ve already reached. The audit works only when you point it at yourself first.


Application

In Practice

Two scenarios show what running the modes in sequence looks like in real time.

Evaluating a Vendor Partnership

Your company is considering a three-year contract with an outsourced customer support vendor. The vendor’s pitch is compelling: lower costs, faster response times, proven track record.

You run the audit.

The Surgeon Test

How is the vendor paid? Per resolved ticket. What does that formula do to the definition of “resolved”? It makes the vendor want to close tickets fast, even if the customer’s problem persists. You ask for data on ticket reopen rates. They’re 34%, meaning one in three customers has to call back. The vendor’s “faster response times” metric is an artifact of premature closure, not a reflection of service quality. The metric looks good. The experience is terrible. Goodhart’s Law, operating exactly as predicted. The vendor isn’t lying. The vendor is optimizing.

The Time Horizon Trap

Your contract is three years. The vendor’s account manager is compensated annually on revenue growth. That account manager has no reason to invest in systems that improve quality beyond the current year, because quality improvements take eighteen months to surface in metrics and the bonus resets every January. The misaligned horizons mean the vendor will optimize for what shows up in this year’s review, not what creates value over the contract term.

The Alignment Engine

You restructure the proposal. Instead of per-ticket pricing, you offer a flat monthly fee with a quality bonus tied to customer satisfaction scores and first-contact resolution rates. Now the vendor makes more money when customers are actually satisfied, not when tickets are technically closed. The incentive and the outcome converge. You have not changed the vendor’s character. You have changed the architecture that shapes the vendor’s behavior.
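
The incentive flip can be made concrete with a toy pricing model. Every number below (fees, ticket volumes, bonus sizes) is hypothetical, invented for illustration; only the 34% reopen rate echoes the scenario above.

```python
# Toy comparison of the two pricing structures (all parameters hypothetical).
def per_ticket_revenue(tickets, reopen_rate, fee=8.0):
    # Every reopened ticket is billed as another "resolved" ticket,
    # so under this structure reopens are revenue.
    return tickets * (1 + reopen_rate) * fee

def flat_plus_quality_revenue(base=40_000, bonus=10_000, reopen_rate=0.34):
    # The bonus scales with first-contact resolution,
    # so under this structure reopens are a penalty.
    return base + bonus * (1 - reopen_rate)

print(per_ticket_revenue(5_000, reopen_rate=0.34))   # sloppy closure pays more
print(per_ticket_revenue(5_000, reopen_rate=0.05))   # careful closure pays less
print(flat_plus_quality_revenue(reopen_rate=0.34))   # sloppy closure now costs
print(flat_plus_quality_revenue(reopen_rate=0.05))   # careful closure now pays
```

Same vendor, same tickets: under per-ticket pricing a high reopen rate maximizes revenue, under flat-plus-quality it minimizes it. The restructure changes nothing about the vendor except which behavior its revenue rewards.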

Three modes. Twenty minutes. The audit didn’t tell you what to do. It told you what you were missing.

Designing a Sales Commission Plan

Your 80-person company is building its first formal sales comp plan. The default: base salary plus commission on closed revenue.

The Surgeon Test

Commission on closed revenue creates an incentive to close any deal, regardless of fit. Your salespeople will sell to customers who shouldn’t buy your product, because bad customers generate the same commission as good ones. A year later, churn spikes and your customer success team is overwhelmed. The sales comp plan created a downstream crisis that nobody in sales will feel, because the commission already paid out.

The Time Horizon Trap

Monthly commission payouts with annual quotas create a December cliff. Reps will pull deals forward into Q4 to hit quota, even if the customer would be better served by a January start. The mismatch between the payout cadence (monthly) and the customer lifecycle (years) guarantees quarter-end behavior that erodes long-term value. Every SaaS company on earth knows this. Most of them keep the same comp plan anyway.

The Alignment Engine

You tie 30% of commission to twelve-month retention. Reps still earn on closed revenue, but a third of the payout vests only if the customer is still paying a year later. Now the salesperson’s payout depends on customer quality, not just customer acquisition. The bad-fit deal that would have paid full commission now pays 70%, and the rep learns, quickly, to qualify harder. You haven’t hired better salespeople. You’ve built a better architecture around the salespeople you have. The rep hasn’t changed. The spreadsheet has.
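
The vesting arithmetic is small enough to write out. The 10% commission rate and deal size below are hypothetical; the 30% vest share is the one from the scenario above.

```python
# Retention-vested commission (commission rate and deal size hypothetical):
# 70% pays at close, the remaining 30% vests only on twelve-month retention.
def commission(deal_value, retained_12mo, rate=0.10, vest_share=0.30):
    upfront = deal_value * rate * (1 - vest_share)           # paid at close
    vested = deal_value * rate * vest_share if retained_12mo else 0.0
    return upfront + vested

good_fit = commission(100_000, retained_12mo=True)   # full commission
bad_fit = commission(100_000, retained_12mo=False)   # 70% of commission
print(good_fit, bad_fit)
```

The rep doing this arithmetic before the deal closes is the entire point of the design.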

The Battle of Zama · 202 BC · The Architecture Made Surrender Impossible

When to Deploy

The Surgeon Test

Whenever you’re relying on someone’s professional judgment and they’re compensated based on the volume or direction of that judgment. Doctors paid per procedure. Advisors paid on commission. Analysts paid for deal flow. The question is not whether these people are ethical. The question is whether the compensation formula is subtly redefining what “good judgment” looks like inside their heads. The gallbladder surgeon was ethical. He still removed healthy organs. Ethics is not a firewall against a misaligned paycheck. It is, at best, a speed bump.

The Time Horizon Trap

Whenever two parties in a relationship have different time commitments. Employee tenure versus company strategy. Fund manager fees versus investor returns. Contractor timelines versus project quality. Where the horizons diverge, the actions diverge, and actions follow reward design with the reliability of water flowing downhill. You do not need to question anyone’s motives to use this mode. You only need a calendar and a comp plan.

The Alignment Engine

Whenever you’re designing a system and have the power to set the reward design. Comp plans. Partnership terms. Organizational design. The question is not “how do I motivate people?” That question leads to posters on walls and speeches at off-sites. The question is: “How do I build a system where the thing I want and the thing they want are the same thing?” Dee Hock answered that question. Lee Kuan Yew answered it. The FedEx night shift manager answered it. The answer is always architectural, never inspirational.

The Second-Order Cascade

Before any policy change, regulatory shift, or reorganization. Ask: if this new incentive takes hold, what new behavior will emerge? And what will that behavior cause? Kennedy’s tax advisors asked the first question but not the second. The answer to the second question was the Eurodollar market. Henry Alexander asked both questions in a single afternoon and predicted the next thirty years of global finance from a conference room at 23 Wall Street.

Connected Models

The Inversion Stack. The Incentive Audit is the diagnostic companion to Inversion. Where the Inversion Stack asks “what could go wrong?”, the Incentive Audit asks “what is the system paying people to do?”—which often reveals exactly what will go wrong and why. Run them in sequence: invert first to identify failure modes, then audit the reward design to see which failures the current system is actively funding. The combination catches problems that neither tool surfaces alone, because some failures are not random. They are purchased.

The Asininity Catalog. Most of the errors Munger collected over a lifetime of observation were incentive-driven at root: smart people making terrible decisions because the incentive formula pointed them at the wrong target. The Surgeon Test feeds directly into the Asininity Catalog’s classification system. Every asininity has a cause. The Incentive Audit finds it. When you encounter organizational stupidity that persists despite intelligent people and good intentions, stop asking “why are they so dumb?” and start asking “what are they being paid to do?” The answer will be clarifying and frequently depressing.

The Reliability Compound. The operators who consistently align rewards with outcomes are the ones whose track records compound over decades rather than collapsing in a single spectacular quarter. Capital Group, Visa, Singapore’s healthcare system: these are not motivational stories. They are engineering documents. The Incentive Audit identifies misalignment. The Reliability Compound is what happens when you fix it and leave it fixed for fifty years.

Every model in this system touches incentives eventually. The Incentive Audit is where you start when you need to understand why rational people are producing irrational results. The answer, almost always, is that they aren’t producing irrational results at all. They are producing exactly the results the architecture pays them to produce. You just hadn’t read the architecture correctly.

Sources

[1] Munger, Charlie. “The Psychology of Human Misjudgment.” Speech, 1995; collected in Poor Charlie’s Almanack, 2005.
[2] Munger, Charlie. “USC Commencement Address.” University of Southern California, 16 May 2007.
[3] Munger, Charlie. “The Art of Stock Picking.” Delivered at USC Business School, 1994.
[4] Ariely, Dan. “The Human Capital Factor.” Capital Allocators, hosted by Ted Seides, Ep. 195, 17 May 2021.
[5] “The Time Horizon Problem in Health Insurance.” Podcast discussion, 2025.
[6] Dimon, Jamie. “How J.P. Morgan Became the Most Dominant Bank in America.” Acquired, hosted by Ben Gilbert and David Rosenthal, 15 June 2025.
[7] “The Roman Republic and the Incentive Structure of Warfare.” History podcast, 2025.
[8] Munger, Charlie. “Daily Journal Annual Meeting 2021.” Daily Journal Corporation, 24 Feb. 2021.
[9] “Visa.” Acquired, hosted by Ben Gilbert and David Rosenthal, 2 Apr. 2024.
[10] Gitlin, Mike. “The Century of Capital Group.” Capital Allocators, hosted by Ted Seides, Ep. 479, 5 Jan. 2026.
[11] Chernow, Ron. The House of Morgan: An American Banking Dynasty and the Rise of Modern Finance. Grove Press, 1990.
[12] Altman, Sam. “Sam Altman on Trust, Persuasion, and the Future of Intelligence.” Conversations with Tyler, hosted by Tyler Cowen, 5 Nov. 2025.
[13] Hansson, David Heinemeier. “DHH: Ruby on Rails, Basecamp, Remote Work, and the Future of Programming.” Lex Fridman Podcast, 28 Oct. 2025.