Thinking and Open Ideas
Current open questions
- CRITICAL: Which holding formalization? Three candidates from ChatGPT (see feedback.md for details):
  - Option C (single case-normal halfspace) as main model spine — maximally tractable, Props 1–3 + corollary fall out immediately, but no strategic holding-writing in C1
  - Variant A (chosen rationale) as extension — adds breadth knob and equilibrium holding choice (Prop 4)
  - Variant B (minimal doctrinal rule) — high legal realism but less strategic content
  - Recommended packaging: C for main theorems → A as extension for strategic breadth
  - Decision needed before any formal work can proceed
- CRITICAL: Fix the entailment/no-dicta definition — current $\mathcal{F}_t(d_t; z_t) \subseteq H_t$ makes holding a superset (can't cut anything); need to reverse direction (see feedback.md for fix)
- How to endogenize overruling and sanctioning — make sanctions the outcome of coordination among peers/panels/higher courts?
- Should the overruling cost depend on age of precedent, number of citations, degree of reliance?
- How to reconcile the formal model (paper/paper.tex) with the informal coordination/focal-point framing (paper/intro.tex)?
Possible directions
- Close the model with specific constraint languages (polytope updates) and derive equilibrium holdings
- Develop the Schelling/focal-point connection from intro.tex into the formal framework
- Add a panel/collegial court extension where multiple judges must agree
- Explore convergence properties: when does the feasible set shrink to near-singleton vs remain broad?
- Consider vertical stare decisis (hierarchical courts) as an extension
Connections to literature
- Schelling focal points — law as coordination device
- McAdams — expressive law and focal points
- Attitudinal model (Segal & Spaeth) — ideology drives decisions; our model nests this when feasible set is large
- Strategic model (Epstein & Knight) — judicial strategy; our model adds holdings as strategic instruments
- Epstein & Posner (2016) — loyalty to president; old empirical project, may connect to binding force question
- Legal formalism vs realism debate — our model reconciles: law constrains but doesn't determine
Methodological sketches
- Empirical test: ideology should predict outcomes more strongly when doctrine is unsettled (large feasible set)
- NLP-based breadth measures for holdings — proxy for constraint breadth $B(H_t; \mathcal{F}_t)$
- Random judge assignment to panels as quasi-experimental variation in doctrine evolution
- Longitudinal tracing of specific doctrinal areas (Equal Protection, Due Process) as feasible-set evolution
Importable mathematical frameworks
Several existing frameworks provide machinery we could use directly for formal results, rather than building from scratch.
Convex body shrinkage (Grünbaum 1960; Bertsimas & Vempala 2004)
- Key result (Grünbaum): any halfspace containing the centroid of a convex body in $\mathbb{R}^d$ contains at least a $(d/(d+1))^d \geq 1/e$ fraction of its volume; equivalently, a cut through the centroid removes at least $1/e$ of the feasible set and leaves at most $1 - (d/(d+1))^d$
- Gives binding force as volume reduction per holding — a ready-made proposition
- Dimension dependence: the guaranteed per-cut removal falls from $1/2$ at $k = 1$ toward $1/e$ as $k$ grows → law harder to pin down in doctrinally complex areas
- Early-precedent disproportionality: volume removal is largest when $\mathcal{F}_t$ is large, so early holdings matter most
- These are proven theorems — can cite and apply directly
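As a quick numerical sanity check (a sketch with invented sample sizes, not part of the formal model), Grünbaum's bound can be verified by Monte Carlo in $d = 2$ on the unit right triangle, whose worst centroid cut attains $(2/3)^2 = 4/9$ exactly:

```python
import math
import random

random.seed(0)

# Points uniform in the triangle with vertices (0,0), (1,0), (0,1);
# its centroid is (1/3, 1/3).
def sample_triangle(n):
    pts = []
    for _ in range(n):
        u, v = random.random(), random.random()
        if u + v > 1:                      # fold into the lower-left triangle
            u, v = 1 - u, 1 - v
        pts.append((u, v))
    return pts

pts = sample_triangle(100_000)
cx = cy = 1 / 3

def side_fraction(nx, ny):
    """Volume fraction on the side {(p - centroid) . n >= 0} of a centroid cut."""
    hits = sum(1 for (x, y) in pts if (x - cx) * nx + (y - cy) * ny >= 0)
    return hits / len(pts)

# Random cut directions, plus the worst case: the cut parallel to the
# hypotenuse, which attains the Grunbaum bound (2/3)^2 = 4/9 exactly.
angles = [random.uniform(0, 2 * math.pi) for _ in range(24)]
directions = [(math.cos(t), math.sin(t)) for t in angles] + [(1.0, 1.0)]

min_frac = 1.0
for nx, ny in directions:
    f = side_fraction(nx, ny)
    min_frac = min(min_frac, f, 1 - f)

print(f"smallest side of any centroid cut: {min_frac:.4f} (bound 4/9 = {4/9:.4f})")
```

No sampled cut leaves less than roughly $4/9$ of the volume on either side, as the theorem requires.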
VC dimension for affine classifiers (Vapnik 1995)
- VC dimension of affine classifiers in $\mathbb{R}^k$ is $k+1$
- Formal lower bound: need at least $k+1$ "informative" holdings to pin down the law
- Measures expressiveness of the constraint language
- Sample complexity results translate to "how many precedents until the feasible set is small"
- Off-the-shelf; would give clean propositions with minimal new proof
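The $k + 1$ figure is easy to demonstrate numerically. A minimal stdlib sketch (the three toy points are ours): a perceptron, which provably converges on linearly separable data, realizes all $2^3$ labelings of three non-collinear points in $\mathbb{R}^2$, confirming that affine classifiers shatter $k + 1 = 3$ points:

```python
from itertools import product

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # non-collinear, so shatterable

def perceptron_fits(labels, max_epochs=1000):
    """Return True if a perceptron finds (w, b) realizing the labeling."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for (x, y), lab in zip(points, labels):
            pred = 1 if w[0] * x + w[1] * y + b > 0 else -1
            if pred != lab:                      # standard perceptron update
                w[0] += lab * x
                w[1] += lab * y
                b += lab
                mistakes += 1
        if mistakes == 0:
            return True
    return False

shattered = all(perceptron_fits(list(labs)) for labs in product([-1, 1], repeat=3))
print("all 8 labelings realizable:", shattered)
```

The matching upper bound (no 4 points in $\mathbb{R}^2$ can be shattered — e.g., the XOR labeling of a square's corners is not affinely separable) is the standard textbook argument and can simply be cited.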
Niblett, Posner & Shleifer (2010) as the $k=1$ base case
- Literature.md notes "our base model is essentially Niblett's"
- Could present 1D version as known benchmark, then show what new phenomena emerge in $k \geq 2$: non-convergence, drift, breadth tradeoff becoming nontrivial
- Clean paper structure: 1D → kD extension
Cutting-plane convergence (Kelley 1960)
- Well-characterized convergence rates for how many cuts needed to $\varepsilon$-approximate a target
- Translates to "how many precedents until feasible set diameter falls below $\varepsilon$"
- Gives formal rate of doctrinal rigidification
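In the simplest (1D) case the rate is just bisection. A toy sketch, assuming each precedent cuts the feasible interval at its midpoint (the most informative holding), showing the $\lceil \log_2(1/\varepsilon) \rceil$ precedent count:

```python
# Feasible set starts as [0, 1]; each holding halves it.
lo, hi = 0.0, 1.0
eps = 1e-3
cuts = 0
while hi - lo > eps:
    mid = (lo + hi) / 2
    hi = mid            # the holding rules out one half (here: the upper half)
    cuts += 1
print(f"precedents needed for diameter <= {eps}: {cuts}")
```

Real holdings are not midpoint cuts, so this is an upper bound on informativeness; Kelley-style results give the analogous rates for general convex targets.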
Callander & Clark (2017) as direct predecessor to extend
- Already have multi-dimensional fact space + shrinking feasible set in a game-theoretic setup
- What they don't have: (a) holdings as objects separate from outcomes, (b) the breadth tradeoff, (c) overruling
- Could frame model as "Callander & Clark plus holdings-as-constraints" and inherit their baseline results on path dependence and feasible-set dynamics
Set-membership estimation (Schweppe 1968; Milanese & Vicino 1991)
- Control-theory literature studying exactly our mathematical object: unknown parameter vector in polytope shrinking via linear constraints from new observations
- Ready-made tools for polytope diameter, Chebyshev center, convergence rates
- Less known in economics — nice bridge to cite
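A minimal stdlib sketch of the set-membership update (the halfspaces and sample size are invented for illustration): each holding intersects the feasible set with a linear constraint, and rejection sampling tracks the remaining volume — binding force as volume reduction:

```python
import random

random.seed(1)
halfspaces = []                       # list of (a1, a2, b): a1*x + a2*y <= b

def feasible(x, y):
    return all(a1 * x + a2 * y <= b for (a1, a2, b) in halfspaces)

def volume(n=200_000):
    """Monte Carlo estimate of the feasible fraction of the unit square."""
    inside = sum(1 for _ in range(n)
                 if feasible(random.random(), random.random()))
    return inside / n

halfspaces.append((1.0, 1.0, 1.0))    # holding 1: x + y <= 1   (true area 1/2)
v1 = volume()
halfspaces.append((-1.0, 0.0, -0.25)) # holding 2: x >= 0.25    (true area 0.28125)
v2 = volume()
print(f"after holding 1: {v1:.3f}; after holding 2: {v2:.3f}")
```

The control-theory literature replaces the sampling with exact polytope computations (diameter, Chebyshev center), which is what we would cite for formal results.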
Assessment: the most tractable path is probably to build on the Niblett / Callander & Clark tradition (economists recognize the setup), then import convex-geometry and VC-dimension results for comparative statics essentially for free. The version-space / cutting-plane connection supplies a formal language for propositions without our having to prove the underlying geometry ourselves.
Ideas to explore later
- Connection between model and the "slippery slope" literature
- Whether the model can explain why some areas of law are more rule-like (clear constraints) vs standard-like (broad feasible set)
- Relationship between number of dimensions $k$ and binding force
- Institutional design implications: what makes legal systems more or less binding?
Miscellaneous notes
- The paper currently has two substantive applications (Equal Protection, Due Process) — may want to trim to one or add more structure
- intro.tex and paper/paper.tex represent two different framings of the same project; need to integrate
Formalizing holdings
Monotonicity idea
Monotonicity can be seen as one way of formalizing "holding": all cases beyond this threshold must be "grant." This captures the intuition that a holding draws a line in fact space and determines outcomes on one side of it. If the holding is monotone in the relevant dimensions, it means that making the facts "stronger" (moving further along the dimension) cannot flip the outcome.
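A minimal sketch of this idea (the fact vectors and outcome labels are ours, purely illustrative): a granted precedent binds every case that weakly dominates it dimension-by-dimension, a denied precedent binds every dominated case, and everything else stays discretionary:

```python
def dominates(x, y):
    """True if fact vector x is at least as 'strong' as y in every dimension."""
    return all(xi >= yi for xi, yi in zip(x, y))

def forced_outcome(x, precedents):
    """precedents: list of (facts, outcome) with outcome in {'grant', 'deny'}."""
    if any(dominates(x, f) for f, o in precedents if o == "grant"):
        return "grant"                    # x is stronger than a granted case
    if any(dominates(f, x) for f, o in precedents if o == "deny"):
        return "deny"                     # x is weaker than a denied case
    return None                           # precedent leaves discretion

precedents = [((0.6, 0.7), "grant"), ((0.2, 0.3), "deny")]
print(forced_outcome((0.8, 0.9), precedents))   # grant: dominates a granted case
print(forced_outcome((0.1, 0.1), precedents))   # deny: dominated by a denied case
print(forced_outcome((0.7, 0.1), precedents))   # None: discretionary region
```

Note how the discretionary region is exactly the set of fact patterns incomparable (in the partial order) to all precedents — the feasible set in this formalization.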
Neighborhood / decaying binding force (Holger idea)
Instead of a holding binding globally, assume that cases only bind in a neighbourhood of their fact pattern, or that binding force decays with distance in fact space. This captures the intuition that a precedent is most constraining for similar cases and loses force as new cases become more factually distant. Could be formalized as a kernel or distance-weighted constraint.
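One possible formalization, with the Gaussian kernel and the bandwidth $h$ as our assumptions rather than settled modeling choices:

```python
import math

def binding_weight(x, x_i, h=0.5):
    """Binding force of a precedent at facts x_i on a new case at facts x."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, x_i))
    return math.exp(-d2 / (2 * h ** 2))     # decays smoothly with distance

precedent = (0.5, 0.5)
print(binding_weight((0.5, 0.5), precedent))  # full force at the precedent itself
print(binding_weight((0.6, 0.5), precedent))  # slightly weaker nearby
print(binding_weight((1.5, 1.5), precedent))  # nearly free far away
```

The bandwidth $h$ then becomes an interpretable parameter: how far a holding's "neighbourhood" extends in fact space.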
Norm enforcement among judges
Kandori-style enforcement
For judges, the "viral tweet moment" — where some deviation from the law is made public — could serve as the Kandori marking device. Judges might shy away from deviation to avoid being associated with it once that moment comes. Or they can simply claim to be shocked (like the colleagues of the Stavanger lawyer). If you see something, you are obliged to report it.
Low cost of enforcement
When you decide who to invite for dinner, it is not costly to just invite someone else if you think one person is "tainted." So norm enforcement is not necessarily costly — the cost of excluding a norm-violator from social/professional circles is low for any individual enforcer.
Socialization into law
- Socialization into legal norms matters (Federalist Society as example of organized socialization)
- Judges are embedded in professional communities that reinforce certain interpretive commitments
- This could be modeled as a prior over admissible theories that judges bring to the bench
Slippery slope connection
- The model incorporates the slippery slope idea: where law ends up might depend on the density of cases (X) in different regions of fact space
- Early cases in one direction can make it progressively harder to resist further movement in that direction
- This connects to path dependence in the formal model
Jan 12 conceptual outline (11 points)
- Fact patterns are highly multidimensional and each past case is just one mapping from fact space to a decision. As a consequence, there will always be multiple legal theories / models that fit past cases perfectly. In and of itself, "following precedent" is not binding, since you never see a new case with exactly the same facts.
- For precedents to have binding effects they must also provide guidance on how to extrapolate from that case — what really are the facts relevant for the decision (the "holding"). A theory of the binding force of law must include a definition of "holding."
- Norm to follow past decisions (for good reasons — legal stability + judges should "just apply the law").
- The law must have (and has) a way of dealing with errors (overruling and distinguishing).
- Reasonable judges can disagree about which "model" fits the data best.
- Cases (and doctrine) have summary statements (or "restatements") about what the law is — a simplified model that fits past cases reasonably well. One must also allow for such statements to be "erroneous."
- Judges have preferences among competing models, but also want to make sure it looks like they are "just applying the law." So they are constrained to pick among models that fit the data.
- If forces in society are relatively equal, could get a situation where two competing theories coexist.
- Hierarchical stare decisis is not our interest — reversals etc. provide incentives to lower-court judges to follow precedents.
- Courts could in theory make the law crystal clear. But tradeoff between efficiency and clearness. Rules vs standards discussion. Many examples of SCOTUS coming up with bright-line rules when standards have failed (e.g., Miranda). But rules are brittle, gameable, non-adaptive, and usually also statically inefficient.
- Judges can always engage in fact discretion, but that won't change the law.
Bright-line rules as self-binding
The Supreme Court is acutely aware that lower-court judges differ ideologically, appellate review is imperfect, standards invite discretion, and discretion invites ideological drift.
Bright-line rules reduce interpretive degrees of freedom, make deviations observable, increase reputational and reversal costs, and create focal points for coordination across courts. This is especially important in politically salient domains, constitutional rights, and criminal procedure.
The judge-constraint logic dominates at SCOTUS because:
- Institutional self-awareness: the Court knows lower courts are heterogeneous, knows it cannot monitor everything, knows standards drift over time. Bright-line rules are a cheap way to stabilize doctrine.
- Visibility and reversibility: easier to reverse a judge who ignored a clear rule; standards make deviations plausible and deniable.
- Legitimacy and blame shifting: bright-line rules allow judges to say "the law made me do it," protecting both lower courts and SCOTUS from accusations of ideology.
Why SCOTUS rarely says this explicitly: Openly saying "we adopt this rule to restrain judges" would undermine judicial legitimacy, admit indeterminacy, and invite political attack. So the Court talks about administrability, predictability, fair notice, ease of application — public-facing proxies for judge-constraint.
Synthesis for the project: The Supreme Court adopts bright-line rules when the coordination problem among judges is more severe than the information loss from rigidity. Or more starkly: bright-line rules are a technology for disciplining adjudicators under conditions of disagreement and limited monitoring.
Litigation underenforcement (secondary mechanism)
Standards increase uncertainty → uncertainty raises expected litigation cost → risk-averse plaintiffs don't file → violations go unchallenged → law becomes underenforced. Bright-line rules reverse this chain. But the Court tolerates underenforcement more readily than ideological drift, so this mechanism is secondary.
Interaction between the two mechanisms
Judge constraint → predictable outcomes → more litigation → more data → clearer law. The mechanisms reinforce each other but are not symmetric.
Higher courts and strategic bright-line rules
Higher-court judges can have strategic incentives to announce bright-line rules partly to constrain lower-court judges whose ideological priors they don't trust. When higher courts anticipate noncompliance or "slippage" by ideologically distant lower courts, they have reason to write more constraining doctrine.
What prevents overuse: Error costs and injustice at the margin (bright lines can be badly over/underinclusive); unanticipated future fact patterns (rules age poorly); need to assemble and maintain a majority (broad hard rules can lose swing votes); legitimacy concerns (looking like "legislating"); diminishing returns (even bright lines leave room for disagreement about classification and framing).
Vagueness as design choice / option value
Vagueness is often a feature that preserves equilibrium, not a failure of language or logic.
Law is vague when: coordination is stable without precision; local information matters; adaptation is valuable; monitoring is costly but tolerable.
Law becomes crisp when: coordination is fragile; ex ante guidance is critical; actors have incentives to defect; errors are catastrophic or systematic.
This contrast explains Miranda (a crisp rule where coordination is fragile) versus negligence (a vague standard where local information matters).
Important caveat: Because the world is open-ended, no finite rule system can anticipate all future factual configurations without either becoming infinitely complex or reintroducing discretion via catch-alls. "Crystal clear in all situations" is achievable only relative to a given state of the world. But that's a practical limitation, not an inherent indeterminacy of law.
Framing thoughts
- Frame the paper as (i) a way of formalizing existing accounts, (ii) formalization makes it easier to see implications, (iii) connect with data
- Judges stating rules (like Learned Hand) can be seen as summarizing their interpretation of pre-existing case law, potentially seeking to freeze their model
- The model explains why such activities are an important part of the precedent system
- Doctrinal work serves the same role — compressing and freezing interpretations
- This may help address critiques about novelty
Alternative introduction framings (model-selection / hypothesis-class)
Framing 1: Full model-selection version
"The law" as a hypothesis class of decision rules over a multidimensional case space. Past decisions populate this space with noisy data. An admissible legal theory is any decision rule that fits the core of the precedent set sufficiently well. The threshold of admissibility is determined by professional norms and the risk of sanctions. A judge deciding a new case selects a theory from this admissible set, trading off: (i) fidelity to precedent (adequate fit), (ii) preference for simple, coherent functional forms (bright-line rules, linear thresholds), and (iii) substantive outcome preferences. Past precedents that don't fit are rationalized as "errors," distinguished on narrow grounds, or explicitly overruled. Landmark reversals (like the treatment of Roe) are episodes where a new majority reclassifies a cluster of precedents as errorful data points and re-fits the underlying model.
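The core selection step can be sketched in a few lines (the precedent data and the tolerance $\tau$ are invented for illustration): admissible theories are the threshold rules that misclassify at most $\tau$ past cases, and each judge picks the admissible threshold closest to her ideal point:

```python
# Toy version of the model-selection framing with 1D threshold rules.
precedents = [(0.1, 0), (0.3, 0), (0.45, 1), (0.6, 1), (0.9, 1)]  # (facts, grant?)
grid = [i / 100 for i in range(101)]                              # candidate thresholds

def misfit(t):
    """Number of past cases the rule 'grant iff x > t' gets wrong."""
    return sum(1 for x, y in precedents if (x > t) != (y == 1))

tau = 0                                # how much "error" the profession tolerates
admissible = [t for t in grid if misfit(t) <= tau]

def judges_choice(ideal):
    """Pick the admissible threshold closest to the judge's ideal point."""
    return min(admissible, key=lambda t: abs(t - ideal))

print(f"admissible thresholds: [{min(admissible)}, {max(admissible)}]")
print("liberal judge (ideal 0.2) picks:", judges_choice(0.2))
print("conservative judge (ideal 0.8) picks:", judges_choice(0.8))
```

Both judges stay inside the admissible band, yet ideology determines where within it they land — constraint and polarization in one mechanism.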
Three central implications
Reconciles strong constraint with persistent polarization. Law constrains by shrinking the admissible set. Yet within the admissible region, ideology matters. Disagreement is most intense exactly where the law is most underdetermined.
Explains observed simplicity of legal doctrine. Courts prefer low-dimensional, easily communicable decision boundaries (Hand-rule style linear tradeoffs, categorical thresholds) even when more complex mappings would better fit past cases. Simplicity preferences are a central determinant of which theories are selected.
Generates path dependence and regime shifts. Early cases and random judge assignments push the legal system toward one region of model space. Composition changes trigger abrupt doctrinal shifts, as a new majority selects a different admissible theory and recodes existing outlier cases as mistakes.
Framing 2: More modest version
We do not claim to discover a new philosophy of adjudication. Jurisprudence has long emphasized that legal materials underdetermine outcomes, that interpretation trades off "fit" with justification and coherence, and that precedent is often permissive rather than strictly mandatory. Our contribution is to formalize these familiar ideas in a simple model-selection framework and connect them to observable data. Activities that loom large in legal practice — stating general rules (Hand formula), systematizing doctrine, writing restatements — are not mere rhetoric. They are attempts to freeze particular models of the case law into explicit decision rules that narrow the set of admissible interpretations for future judges.
Why EP doctrine is not settled (model-consistent explanation)
Despite decades of litigation and canonical tests, EP doctrine is not settled because precedent constrains the form of adjudication rather than its substance.
Key mechanisms:
- Doctrine settles form, not content. The hypothesis class is fixed; the optimal hypothesis is not. Tiers, tests, and named elements are settled. But what counts as compelling, how much fit is enough, and which factual dimensions deserve weight — these are not.
- Dense but underdetermining in high-dimensional fact space. EP cases vary along dozens of dimensions (facial vs functional classification, type/degree of stigma, nature of government interest, availability of alternatives, institutional setting, political/historical context). Even hundreds of cases don't densely cover this space.
- Tests are elastic by design. "Compelling," "important," "substantially related," "narrowly tailored," "discriminatory purpose" — these are containers that absorb disagreement while preserving the appearance of constraint. This allows adaptation, communicability, and legitimacy across ideological coalitions.
- New cases keep introducing new combinations of old dimensions. Race + technology, sex + athletics, sexual orientation + religious exemptions. Precedent rarely resolves how dimensions interact.
- EP is a site of ongoing moral and political disagreement. Persistent disagreement is expected where law is used to mediate unresolved normative conflict.
- The admissibility threshold prevents convergence. Judges must justify decisions as lawful, but many justifications remain plausible. This creates bounded disagreement rather than convergence.
- Composition changes reset which models are focal. A new majority reweights dimensions, reclassifies prior cases as "misapplied" rather than wrong, producing regime shifts without doctrinal collapse.
Summary: Despite decades of litigation, EP doctrine is not settled because precedent constrains the form of adjudication rather than its substance. Courts have fixed the hypothesis class — tiers of scrutiny, required elements, permissible evidentiary dimensions — but not the weights placed on those dimensions or their interaction in novel factual settings. In a high-dimensional fact space, even dense precedent leaves many admissible legal theories.
Model implications (5 big-picture)
Law both constrains and underdetermines. Judges can't pick any rule; they must choose from models that fit precedent "well enough." But within the admissible set, preferences (ideological, policy, distributive) matter.
Disagreement is endogenous and lawful. Liberal and conservative judges can both be "following the law" in a meaningful sense: they pick different admissible models, not pure willfulness. Polarization is strongest where the admissible set is large; consensus where it is small.
Precedent is partly data, partly noise. Some precedents are treated as informative constraints; others as "error" or anomalies. Overruling/distinguishing is literally "data cleaning" in model space.
Doctrinal shape is partly a simplicity prior. Courts prefer simple, stable rules (linear thresholds, bright lines) as long as they fit. When enough anomalies accumulate, they switch to a new, often still simple but differently oriented model.
Path dependence and regime shifts. Early cases and random judge assignment can lock in a region of model space. Composition shocks (new majority) trigger a re-fit that reclassifies earlier cases as errors, causing abrupt doctrinal shifts.
Testable implications
A. Local underdetermination and polarization
Hypothesis: Cases in parts of the feature space where many models fit past decisions equally well will exhibit more frequent ideological splits, more dispersion across judges, and more instability over time.
How to test: Train multiple predictive models on past decisions (different algorithms, regularization, random seeds, feature subsets). For each new case, compute variation in predicted probabilities across models = measure of local underdetermination. Regress indicator for ideological split / vote margin on local model disagreement, controlling for salience, issue area. Prediction: higher model disagreement → higher probability of 5-4 along ideological lines.
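A stdlib sketch of this measurement (the data-generating process and all numbers are invented): fit a bootstrap ensemble of one-dimensional threshold models and use cross-model vote variance as the local-underdetermination score:

```python
import random

random.seed(2)
# Synthetic precedents: grant iff facts > 0.5, with 10% "noisy" decisions.
xs = [random.random() for _ in range(300)]
data = [(x, (x > 0.5) != (random.random() < 0.1)) for x in xs]

grid = [i / 100 for i in range(101)]

def fit_threshold(sample):
    """Best-fitting threshold rule 'grant iff x > t' for a resampled data set."""
    return min(grid, key=lambda t: sum((x > t) != y for x, y in sample))

models = []
for _ in range(100):                      # bootstrap ensemble
    sample = [random.choice(data) for _ in range(len(data))]
    models.append(fit_threshold(sample))

def disagreement(x):
    """Variance of the ensemble's vote at facts x = underdetermination score."""
    p = sum(x > t for t in models) / len(models)
    return p * (1 - p)

print(f"near boundary (x=0.50): {disagreement(0.50):.3f}")
print(f"far from it   (x=0.05): {disagreement(0.05):.3f}")
```

The regression step then replaces the synthetic data with real case features and the ideological-split indicator.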
B. Simplicity bias in doctrine
Hypothesis: Given the same set of precedents, the doctrine the court articulates will be simpler than what a purely predictive ML model would choose, even at the cost of some fit.
How to test: Pick an area with a crisp stated rule. Estimate a flexible model (random forest, boosted trees) and a simple model (linear threshold, 1-2 variables). Compare the court's stated rule (as a simple model) vs more complex ML models. Look for systematic underuse of available predictive structure — the court's "doctrine boundary" will typically look more linear / low-dimensional than the best predictive boundary.
C. Error-labeling and regime shifts
Hypothesis: When a court's composition shifts, opinions by the new majority will use more language framing old precedents as "errors" / "misreadings" / "departures from principle," and will especially target cases that generate large misfit relative to the new majority's preferred model.
How to test: Identify composition changes (before/after key appointment). Use text analysis on majority opinions: build dictionaries of error/correction language. Correlate increase in error-language with overruling, narrowing, or distinguishing older cases. Optionally link to model fit: fit a model on post-change cases, compute residuals for earlier ones, test whether high-residual precedents are most likely flagged as erroneous.
D. Case density and convergence
Hypothesis: As more cases accumulate in a region of feature space, the admissible model set shrinks and ideological dispersion decreases, overrulings become rarer, and doctrinal shifts require bigger "error corrections."
How to test: For each case, construct measures of local precedent density (number of prior cases with similar fact patterns, nearest neighbors in feature space). Relate precedent density to probability of ideological split, frequency of overruling, size of doctrinal shifts.
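A sketch of the density measure (synthetic fact space; "similar" operationalized as Euclidean distance within a radius $r$, our choice):

```python
import math
import random

random.seed(4)
# Synthetic prior cases scattered in a 2D fact space.
prior_cases = [(random.random(), random.random()) for _ in range(500)]

def local_density(x, r=0.1):
    """Number of prior cases within distance r of fact pattern x."""
    return sum(1 for c in prior_cases if math.dist(x, c) <= r)

print("density at a central fact pattern:", local_density((0.5, 0.5)))
print("density at a corner fact pattern: ", local_density((0.99, 0.99)))
```

In the empirical version, the coordinates would be case features (hand-coded or text-derived), and density would enter the regressions predicting splits and overrulings.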
E. LLMs as measurement tools
Use an LLM (fine-tuned on past decisions) as a measurement device for the admissible model set. Perturb the prompt or training data slightly and see how often the model's prediction for a case flips. High flip frequency = locally underdetermined region. Then check whether those cases are exactly the ones where human judges most often split along ideological lines. This gives a "computational jurisprudence" angle: LLM instability as a proxy for legal underdetermination.
Requirements for model
- For laws and precedents to have an effect we need multiple equilibria. Global games and similar frameworks select a unique equilibrium, so they are a poor fit.
- Accommodate slippery slope idea.
- Must have the intuition that judges might defect if the temptation is high enough (stakes are very high, like Bush v Gore).
- Theory likely needs the idea that some equilibria are easier to sustain when under pressure.
- Must have that a judge follows the law because of potential sanctions if not. Other judges must monitor + have incentives to sanction if the first judge deviates from "the law."
- Deciding in panels helps with monitoring. For trial judges, appeals and reversals play the same role. (This explains why panels are needed at the top, and why single-judge decisions at the top, such as in Brazil, are dangerous for the rule of law.)
- The law has a lot of vague words like "reasonable" — but judges might assign objective and clear interpretations of those terms through precedents.
- Precedents usually narrow down the possible interpretations, but could also open up room for discretion.
Terms that have drifted vs remained stable
Terms that have drifted from original meaning
- Due process — now includes "substantive due process"
- Consideration — originally "quid pro quo," now found in almost any exchange; a "ghost" concept that survives mostly for historical reasons
- Reasonable person — intended as objective measure of care; often functions as empty rhetorical placeholder for the judge's own normative judgment
- Public interest — so elastic that it often functions as a legitimizing formula rather than a meaningful standard
- Malice — originally genuine ill will; now a legal fiction in many contexts ("malice aforethought" no longer requires hatred)
- Mens rea — internal moral blameworthiness; now often redefined through statutory mental states or even strict liability
- Intent — frequently inferred from patterns or circumstances; functions as a label for responsibility, not a genuine inquiry into mental state
- Possession — originally physical control; now "constructive possession" can mean legal responsibility for something you didn't hold
- Consent — originally voluntary, informed agreement; now formalized through checkboxes, adhesion contracts, or implied behavior
- Notice — originally actual awareness; now "constructive notice" means you should have known
- Good faith — originally moral expectation of honesty; now defined minimally (not outright deception); a catch-all rhetorical clause
- Reasonable doubt — originally a moral safeguard of near-certainty; courts avoid quantifying it; empirical studies show it's often misunderstood
Terms that have NOT drifted
- Habeas corpus — "you shall have the body"; still exactly that
- Trespass — wrongful interference with another's person or property
- Forgery — making or altering a document with intent to deceive
- Perjury — lying under oath; still lying under oath
- Theft, contract, tort, oath — all retain essentially their original meanings
What makes terms stable
- Concrete referents — tied to observable acts ("lying under oath," "taking property")
- Institutional continuity — linked to specific legal rituals or procedures (writs, oaths, filings)
- Functional persistence — they serve enduring social purposes (property transfer, debt relief, dispute resolution)
Early model sketches
Holger's project description: "A Model of Law"
Three ingredients:
- Language is open to interpretation. The sense of words is not given but a matter of convention, which may change over time and depend on context. Interpretation can be modeled as a subjective distribution over other people's likely views of meaning. The distribution is neither flat nor degenerate — law provides both freedom and constraint.
- Own-side bias. Well-documented psychological phenomenon that skews individuals' subjective distribution over legal meaning towards their content preferences (political preferences, position in litigation). Explains partisan splits on SCOTUS.
- Deference to authority if the alternative is worse. People's relationship to law as a choice between the legally structured status quo and an unknown alternative (anarchy/turmoil). Decision-makers take decisions that a sufficiently large number of people will view as plausibly guided by the law. The binding force of law arises from the threat of upheaval when a decision seems too far out of line.
Key claims:
- Cases heard by SCOTUS are selected for being hard and politically salient → own-side bias is strong and the case falls between the means of two political groups → stark splits are possible even while the overall dispersion of interpretations is low and the law has strong force almost everywhere
- Strong self-fulfilling prophecies in legal education: the more legal education stresses openness of legal materials, the higher the variance of new lawyers' priors, and hence the more decisions can vary without triggering the "upheaval response"
- Hart's core/penumbra and Dworkin's integrity appear as special cases
- Unlike existing models (Gennaioli & Shleifer 2007; Fernandez & Ponzetto 2012), which postulate that the law is known but changeable at a cost, this model builds on the idea that the law is always more or less unclear and judges can effect change because it's never clear whether they are changing or applying it
Pseudo-model setup (from Holger)
- Players: $N \geq 2$, with preferences over case outcomes (binary); rational except that signal interpretation is biased toward preferred outcomes. Sophisticated biased players (they know others are also biased).
- Action space: decide infinite series of cases — alternating or randomly assigned each period.
- Information set: all prior decisions + private signal (their interpretation of the statute). Meaning of all prior decisions = "the law" (includes all biases of prior judges, which induces noise and may prevent convergence). Could have initial information set (statute only), read by first judge per their signal, decision + statute read by next judge, etc.
- Equilibrium: cooperate by deciding in accordance with (your interpretation of) the law unless the other side is perceived to be deviating, in which case punishment period ensues (potentially grim trigger).
- Extensions: binary types (judges who decide cases vs others who assess faithfulness and punish); continuous types (more/less influential judges).
- Needs a state variable to be tractable.
Dec 5 2025 model sketch
Model 1: Judges learning from past decisions
Assumptions:
- Facts in case $i$ observed by judge $j$ as $X_i + \varepsilon_{ij}$ where $\varepsilon_{ij}$ is iid across judges and cases. They also see the decision in past cases ("grant" or "deny").
- Judge $j$ at time $t$ "grants" if $X_i + \varepsilon_{ij} > X^*_{jt}$ for a threshold $X^*_{jt}$ = "that judge's interpretation of the law."
- A judge is punished if perceived by other judges to deviate "sufficiently" from their interpretation of the law.
- Judges select their threshold to avoid such punishment.
Expected results:
- A judge's threshold will roughly be determined by the point at which (as this judge reads past cases) other judges tend to "grant." Newer cases are given higher weight (since other judges' interpretations may change over time).
- In the long run, judges' thresholds will converge (since $\varepsilon_{ij}$ is iid) and the meaning of the law will stabilize. There will still be uncertainty in the factual interpretation of each case, but judges will end up using the same threshold.
Adding judge biases: A richer model incorporates judge biases/preferences. Past cases are not perfectly informative of other judges' beliefs. In the long run one might statistically learn other judges' biases. Not clear what the consequences will be.
Issue: In this model, "stare decisis" is not assumed but appears as an endogenous feature. Not sure if that is good or not.
Model 2: Simplest possible model
- Cases drawn from $X \sim \mathcal{U}[0,1]$, randomly assigned to judges
- Each judge has a preference about where "the law" is — the level of $X$ at which defendant should be held liable
- They can see past decisions: $\{x_1, d_1\}, \{x_2, d_2\}$, etc.
- If they don't comply with past decisions they suffer a large reputational cost
Expected results:
- Judges impose their preferred decision as long as not blocked by precedent. E.g., Judge A prefers "liable" at $x = 0.4$; facing a case with $x \geq 0.4$, she holds the defendant liable unless some prior judge held a defendant not liable in a case with an even higher $x$.
- The law is determined by initial cases and their judge assignments (path dependence).
- Understanding of the law converges over time.
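A simulation sketch of Model 2 (all parameters invented): cases $x \sim U[0,1]$ are randomly assigned to judges with heterogeneous preferred cutoffs; a judge rules her preferred way unless precedent already pins the case down, and the feasible band of cutoffs $[lo, hi]$ shrinks as decisions accumulate:

```python
import random

random.seed(3)
judges = [0.3, 0.4, 0.5, 0.6, 0.7]      # preferred liability cutoffs
lo, hi = 0.0, 1.0                        # lo = highest x held NOT liable,
                                         # hi = lowest x held liable so far
widths = []
for _ in range(200):
    x = random.random()
    judge = random.choice(judges)
    if x >= hi:
        liable = True                    # bound by precedent
    elif x <= lo:
        liable = False                   # bound by precedent
    else:
        liable = x >= judge              # discretion within the feasible band
        if liable:
            hi = x                       # future cases above x must be liable
        else:
            lo = x                       # future cases below x must not be
    widths.append(hi - lo)

print(f"feasible band after 200 cases: [{lo:.3f}, {hi:.3f}] (width {hi - lo:.3f})")
```

The band never widens, and where it settles depends on the early case/judge draws — path dependence and convergence in one run.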
Model 3: Extension with private information
- Each judge sees the case with error: $X + \varepsilon$.
- Punished if they deviate too much from past cases (that cannot be justified by errors). Need to be precise about "too much" since there could be conflicting jurisprudence.
Expected results:
- Judges have incentives to lie about their signals in borderline cases
- But this builds jurisprudence that can justify even further changes
- Over time, judges' preferences can substantially change the jurisprudence
This model explains: (1) law has binding force in the short term, (2) preferences matter in borderline cases, (3) drift in jurisprudence over time.
Open question: How do judiciaries solve problems of conflicting jurisprudence in practice?