Measuring Goodhart’s law

Goodhart’s law famously says: “When a measure becomes a target, it ceases to be a good measure.” Although originally from economics, it’s something we have to grapple with at OpenAI when figuring out how to optimize objectives that are difficult or costly to measure. It’s often necessary to introduce some proxy objective that’s easier or cheaper to measure, but when we do this, we need to be careful not to optimize it too much.

For example, as part of our work to align models like GPT‑3 with human intent and values, we would like to optimize things like “How helpful is this response?” or “How factually accurate is this claim?” These are complex objectives that require humans to carefully check things over. For this reason, we train a model to predict these human preferences, known as a reward model, and use the reward model’s predictions as a proxy objective. But it’s important to keep track of how well the true objective is being optimized.

In this post we’ll look at some of the mathematics behind how we do this. We’ll focus on a setting that is particularly clean to analyze, in which we have access to the true objective. In practice, even human preferences can fail to measure what we really care about, but we’re setting that issue aside in this post.

## Best-of-n sampling

There are many ways in which one could optimize the proxy objective, but perhaps the simplest is best-of-$n$ sampling, also known as rejection sampling or reranking. We simply sample $n$ times and take the sample that scores the highest according to the proxy objective.

Although this method is very simple, it can actually be competitive with more advanced techniques such as reinforcement learning, albeit at the cost of more inference-time compute. For example, in WebGPT, our best-of-64 model outperformed our reinforcement learning model, perhaps in part because the best-of-64 model got to browse many more websites. Even applying best-of-4 provided a significant boost to human preferences.

In addition, best-of-$n$ sampling has reliable performance and is straightforward to analyze mathematically, making it well-suited to empirical studies of Goodhart’s law and related phenomena.
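As an illustrative sketch (the sampler and reward here are hypothetical stand-ins, not the actual WebGPT models), best-of-$n$ sampling is just a few lines:

```python
import random

def best_of_n(sample, proxy_reward, n):
    """Draw n candidates and return the one the proxy objective scores highest.

    `sample` draws one candidate from the base distribution P, and
    `proxy_reward` scores a candidate; in practice these would be, e.g.,
    a language model's sampler and a learned reward model.
    """
    candidates = [sample() for _ in range(n)]
    return max(candidates, key=proxy_reward)

# Toy usage: P is uniform on [0, 1) and the proxy reward is the value itself,
# so best-of-n simply returns the maximum of n uniform draws.
random.seed(0)
pick = best_of_n(lambda: random.random(), lambda x: x, n=16)
```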

## The mathematics of best-of-n sampling

Let’s study best-of-$n$ sampling more formally. Suppose we have some sample space $S$ (such as the set of possible question-answer pairs), some probability distribution $P$ over $S$, a true objective (or “reward”) $R_{\text{true}}:S\to\mathbb{R}$, and a proxy objective $R_{\text{proxy}}:S\to\mathbb{R}$. Let’s say that we somehow optimize $R_{\text{proxy}}$ and thereby obtain some new distribution $P'$. Then:

- The expectation $\mathbb{E}_{x'\sim P'}\left[R_{\text{true}}\left(x'\right)\right]$ measures how well we have optimized the true objective.
- The KL divergence $D_{\text{KL}}\left(P'\parallel P\right)$ measures how much optimization we have done.

It turns out that in the case of best-of-$n$ sampling, both of these quantities can be estimated efficiently using samples from $P$.

Let’s look at the expectation first. The naive approach is to use a Monte Carlo estimator: run best-of-$n$ sampling many times, measure the true objective on those samples, and average the results. However, there is a better estimator. If we have $N\geq n$ samples from $P$ overall, then we can simultaneously consider _every possible subset_ of these samples of size $n$, weight each sample by the number of subsets for which it is the best according to the proxy objective, and then take the weighted average true objective score. This weight is just the binomial coefficient $\binom{k-1}{n-1}$, where $k$ is the rank of the sample under the proxy objective, from $1$ (worst) up to $N$ (best).[A]
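A minimal sketch of this estimator, assuming each sample’s proxy and true rewards can be evaluated directly (the function names here are illustrative, not from the WebGPT codebase):

```python
from math import comb

def best_of_n_estimate(samples, proxy_reward, true_reward, n):
    """Estimate E[R_true] under best-of-n sampling from N >= n base samples.

    Each sample is weighted by the number of size-n subsets in which it has
    the highest proxy score: C(k-1, n-1), where k is its proxy rank
    (1 = worst). The weights sum to C(N, n), which normalizes the average.
    """
    N = len(samples)
    assert N >= n
    ranked = sorted(samples, key=proxy_reward)  # ascending proxy score
    total = sum(comb(k - 1, n - 1) * true_reward(x)
                for k, x in enumerate(ranked, start=1))
    return total / comb(N, n)
```

As a sanity check, with $n=1$ this reduces to the plain sample mean of the true rewards, and with $n=N$ it returns the true reward of the sample with the highest proxy score.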


As well as using samples more efficiently, this also allows us to reuse samples for different values of $n$.

As for the KL divergence, surprisingly, this turns out to have an exact formula that works for any continuous probability distribution $P$ (i.e., as long as $P$ has no point masses). One might naively guess that the answer is $\log n$, since best-of-$n$ is doing something like taking the top $\frac{1}{n}$ of the distribution, and this is roughly correct: the exact answer is $\log n-\frac{n-1}{n}$.[B]
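We can sanity-check this formula numerically. The sketch below takes $P$ to be the uniform distribution on $[0,1]$ for concreteness: the best-of-$n$ density is then $n x^{n-1}$, so the log-density ratio at a best-of-$n$ sample is $\log n + (n-1)\log x$, and averaging it over best-of-$n$ draws gives a Monte Carlo estimate of the KL divergence:

```python
import math
import random

def kl_best_of_n(n):
    """Exact KL divergence of the best-of-n distribution from a continuous P."""
    return math.log(n) - (n - 1) / n

# Monte Carlo check for P = Uniform(0, 1): sample from the best-of-n
# distribution (the max of n uniform draws) and average the log-density ratio.
random.seed(0)
n, trials = 8, 200_000
est = sum(math.log(n) + (n - 1) * math.log(max(random.random() for _ in range(n)))
          for _ in range(trials)) / trials
# est should be close to kl_best_of_n(8) = log(8) - 7/8.
```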

Together, these estimators allow us to easily analyze how the true objective varies with the amount of optimization applied to the proxy objective.

Here’s a real-life example from WebGPT:

## Going beyond best-of-n sampling

The main limitation of best-of-$n$ sampling is that the KL divergence grows logarithmically with $n$, so it is only suitable for applying a small amount of optimization.

To apply more optimization, we typically use reinforcement learning. In the settings we’ve studied so far, such as summarization, we’ve typically been able to reach a KL of around 10 nats using reinforcement learning before the true objective starts to decrease due to Goodhart’s law. We’d have to take $n$ to be around 60,000 to reach this KL using best-of-$n$, and we hope to be able to reach much larger KLs than this with improvements to our reward modeling and reinforcement learning practices.
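The figure of 60,000 comes from inverting the KL formula: for large $n$, $\log n - \frac{n-1}{n} \approx \log n - 1$, so a KL of 10 nats requires $n \approx e^{11}$. A quick sketch of that calculation (the helper function is illustrative):

```python
import math

def best_of_n_for_kl(target_kl):
    """Smallest n whose best-of-n KL, log(n) - (n-1)/n, reaches target_kl.

    The formula is increasing in n, so we bracket by doubling and then
    bisect. For large n the KL is about log(n) - 1, so n ~ e^(target_kl + 1).
    """
    def kl(n):
        return math.log(n) - (n - 1) / n

    lo, hi = 1, 2
    while kl(hi) < target_kl:  # grow the bracket until kl(hi) >= target
        lo, hi = hi, hi * 2
    while hi - lo > 1:         # bisect: kl(lo) < target_kl <= kl(hi)
        mid = (lo + hi) // 2
        if kl(mid) < target_kl:
            lo = mid
        else:
            hi = mid
    return hi

n_needed = best_of_n_for_kl(10.0)  # about 60,000, matching the figure above
```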

However, not all nats are equal. Empirically, for small KL budgets, best-of-$n$ better optimizes both the proxy and the true objectives than reinforcement learning. Intuitively, best-of-$n$ is the “brute force” approach, making it more information-theoretically efficient than reinforcement learning, but less computationally efficient at large KLs.[C]

_We’re actively studying the scaling properties of proxy objectives as part of our work to align our models with human intent and values. If you’d like to help us with this research, we’re hiring!_

A. The sum of these weights is $\binom{N}{n}$, giving a proof of the hockey-stick identity. For a formal derivation of the estimator described here, see Appendix I of the WebGPT paper.

B. Hint: express the PDF of the best-of-$n$ distribution as a function of both the PDF and the CDF of the original distribution.

C. Best-of-$n$ is not necessarily optimal in the information-theoretic sense, however. For example, if $P$ has a heavy right tail, then for any $x>0$ and any $\varepsilon>0$, there is a distribution $Q$ such that $\mathbb{E}_{y\sim Q}\left[y\right]>x$ and $D_{\text{KL}}\left(Q\parallel P\right)<\varepsilon$ (exercise).

Jacob Hilton, Leo Gao

Thanks to Suchir Balaji, Paul Christiano, William Guss, Vineet Kosaraju, John Schulman, Nisan Stiennon, Jeff Wu, and Daniel Ziegler for discussions related to the ideas in this post. Thanks to Greg Brockman, Jan Leike, Holly Mandel, John Schulman, and Jeff Wu for feedback on drafts. Thanks to Bianca Martin, Steve Dowling, Natalie Summers and Justin Jay Wang for communications and design.

Originally published on OpenAI News.