
Quant Research Interview Guide

What quant research interviews actually test at Two Sigma, Citadel, Jane Street and DE Shaw — probability puzzles, statistics, signal construction, and how to drill flashcards the way top researchers do.

Quant research interviews are unlike any other finance round. There is no framework, no resume deep dive, no rapid-fire accounting — just probability problems, statistics, and signal-design questions that get harder until you stop being able to answer. Firms like Two Sigma, Citadel, D. E. Shaw, and Jane Street are deliberately filtering for raw problem-solving ability under cold conditions. This guide covers what the round actually tests, the question clusters to drill, and how to prepare with voice-based flashcards the way top researchers do.

What a quant research interview looks like

Most firms run three to five technical rounds during the onsite, each 45 to 60 minutes. The structure of a single round:

  1. Warm-up (5 min). A short probability puzzle or statistics question to get you into the rhythm. "You flip a fair coin 10 times — what's the expected number of runs?"
  2. Core problems (35–45 min). Two to four harder problems, usually escalating. Expect probability, combinatorics, regression, or signal-related questions. The interviewer expects narration, cleanliness of thought, and rigor.
  3. Follow-ups (5–10 min). "Now the coin is biased" or "Now there is correlation between flips." You are expected to adapt.
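The warm-up above has a clean analytic answer — runs are 1 plus the number of adjacent flips that differ, so the expectation is $1 + (n-1)/2 = 5.5$ for 10 fair flips. A minimal Monte Carlo sketch (function names are illustrative, not from any firm's materials) shows the kind of sanity check interviewers like to see:

```python
import random

def count_runs(flips):
    """Count maximal blocks of identical outcomes in a flip sequence."""
    runs = 1
    for prev, cur in zip(flips, flips[1:]):
        if cur != prev:
            runs += 1
    return runs

def expected_runs_mc(n_flips=10, trials=200_000, seed=0):
    """Monte Carlo estimate of the expected number of runs in n fair flips."""
    rng = random.Random(seed)
    total = sum(
        count_runs([rng.random() < 0.5 for _ in range(n_flips)])
        for _ in range(trials)
    )
    return total / trials

# Analytic answer: 1 + (n - 1) / 2 = 5.5 for n = 10 fair flips.
```

Deriving the closed form via indicator variables and then confirming it against a quick simulation is exactly the narrate-and-verify loop the core problems reward.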

Some firms (Jane Street, some Two Sigma pods) mix in brain-teasers and market-making games. Others (D. E. Shaw, Citadel's statistical arbitrage teams) stay closer to applied statistics and regression. The underlying skill is the same — quantitative reasoning under live pressure.

What interviewers actually score

  • Mathematical maturity. Do you know what tools apply — when to use generating functions vs. recursion vs. symmetry? Can you set up a problem correctly?
  • Rigor. Do your steps actually follow? When you make a simplifying assumption, do you name it and justify it?
  • Intuition. Beyond the manipulation, do you understand why the answer is what it is? Do you sanity-check with limiting cases?
  • Communication. Can you walk through your reasoning so the interviewer can follow without asking "wait, what are you computing?"
  • Recovery under correction. When the interviewer flags an error or pushes back, do you integrate and re-derive, or do you collapse?

The firms running these rounds interview far more candidates than they intend to hire. A strong researcher who misses one hard problem in a session still gets the offer. A candidate who hand-waves through an easy problem and cannot recover is out.

Question clusters

Probability

  • Dice, coins, urns. Expected value problems, conditional probability, Bayesian updates. These make up 90%+ of warm-ups.
  • Random walks and martingales. "Start at 0, flip until you hit +5 or -3 — what's the probability of hitting +5 first?" Optional stopping theorem territory.
  • Order statistics. "Draw n uniform $(0,1)$, what's the expected value of the $k$th smallest?" Beta distribution, combinatorial reasoning.
  • Geometric probability. "Three random points on a unit circle — what's the probability the origin is inside the triangle?" Symmetry-heavy.
  • Poisson processes and waiting times. Memoryless property, thinning, merging.
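The random-walk example in this cluster is classic gambler's ruin: for a symmetric walk from 0 with absorbing barriers at $+a$ and $-b$, optional stopping gives $P(\text{hit } +a \text{ first}) = b/(a+b)$, so $3/8$ for barriers at $+5$ and $-3$. A hedged simulation sketch (names are illustrative) to confirm:

```python
import random

def hit_prob_mc(up=5, down=-3, trials=200_000, seed=1):
    """Estimate P(hit `up` before `down`) for a symmetric random walk from 0."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        pos = 0
        while down < pos < up:
            pos += 1 if rng.random() < 0.5 else -1
        wins += pos == up
    return wins / trials

# Gambler's ruin / optional stopping: P = b / (a + b) = 3 / 8 = 0.375
# for barriers at +a = +5 and -b = -3.
```

In the interview you would derive the closed form, not simulate — but mentioning that you would check it this way in practice signals the right habits.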

Statistics and regression

  • OLS and its variants. Derivation from first principles. What the assumptions are, what breaks when each is violated. Weighted least squares, ridge, lasso.
  • Bias / variance. Decomposition of mean squared error. When to trade one for the other.
  • Hypothesis testing. Type I vs. Type II, power, $p$-hacking pitfalls. Frequentist vs. Bayesian framing.
  • Time series. Stationarity, autocorrelation, AR/MA processes. Why naive OLS fails on time series and what to use instead.
  • Bootstrapping and resampling. When it works, when it does not.
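"Derivation from first principles" for OLS means being able to go from minimizing $\lVert y - X\beta \rVert^2$ to the normal equations $X^\top X \hat\beta = X^\top y$. A minimal NumPy sketch of that derivation (synthetic data, illustrative coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Design matrix with an intercept column plus three regressors.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
beta_true = np.array([1.0, 2.0, -0.5, 0.25])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Normal equations: X'X beta = X'y. Solve the linear system rather than
# forming the explicit inverse, which is numerically sloppier.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
```

Being able to say why `solve` beats an explicit inverse, and what happens to $X^\top X$ under multicollinearity (motivating ridge), is the kind of follow-up this cluster invites.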

Signal design and backtesting

  • What makes a signal. Ex-ante predictiveness, stability across regimes, low correlation with existing book.
  • Backtest pathologies. Look-ahead bias, survivorship bias, data snooping, overfitting. How to construct a clean backtest.
  • Portfolio construction basics. Sharpe, Sortino, drawdown. Why Sharpe alone is insufficient. How to size positions given a signal.
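Look-ahead bias is the pathology most worth internalizing concretely: if the position on day $t$ uses any information from day $t$'s return, a backtest on pure noise will still show a spectacular Sharpe. A toy demonstration (all names and numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0, 0.01, size=5_000)  # i.i.d. noise: no real signal exists

# Look-ahead bug: position uses the SAME day's return it claims to predict.
biased_pnl = np.sign(returns) * returns

# Correct: the position on day t only uses information through day t-1.
lagged_pnl = np.sign(returns[:-1]) * returns[1:]

def sharpe(pnl):
    """Annualized Sharpe ratio, assuming 252 trading days and zero rate."""
    return pnl.mean() / pnl.std() * np.sqrt(252)
```

The biased version prints an enormous Sharpe on data with zero predictability; the properly lagged version hovers near zero. One misplaced `shift` is all it takes.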

Brainteasers and puzzles

Less common at research-heavy shops but still appear at some desks. "100 prisoners and 100 boxes." "You are in a submarine trying to find an enemy moving on the integer line." These are testing structured thinking and creativity under cold conditions.

Preparation roadmap

  • Weeks 1–3: Rebuild probability and statistics. Work through a standard problem book (Sheldon Ross for probability, Casella and Berger for statistics). Target deep fluency on undergraduate material before touching anything harder.
  • Weeks 4–5: Problem drilling. Work through a quant interview problem book — the Heard on the Street style compilations. Target 4–6 problems per day, timed, with full written solutions.
  • Week 6: Applied regression and signal design. If the target firm is statistical arbitrage or systematic equity, spend a week on time-series regression, signal construction, and backtest pitfalls.
  • Week 7: Flashcard drilling. Convert problem clusters into flashcards. Target instant pattern recognition — within 30 seconds of reading a problem you should know whether it is a random walk, an order statistic, a Poisson process, or something else.
  • Week 8: Full mocks with voice. Three to five full mock sessions per day with follow-ups, under timing. Simulate the cold feel of a real superday.

The highest-leverage phase is week 7. Most candidates cannot produce answers under pressure even for problems they would solve in 30 seconds at home. Pattern recognition at speed is the differentiator.

How to practice with InterviewDen

The Quant Research track on InterviewDen drills flashcards from the canonical quant problem bank against a voice-driven interviewer. You hear the problem, think out loud, and answer. The AI grades your answer against the reference, flags missing steps, and asks follow-ups — "Now what if the coin is biased?" — the way a real researcher would.

The system tracks your weak clusters and re-surfaces them with spaced repetition. When you consistently miss Poisson processes, you see more Poisson processes. When you master a cluster, the system moves on. The debrief shows exactly where you hesitated and where you were rigorous.

Start a session from quant research practice — pick a cluster or let the system pick based on your history.

Common mistakes

  • Jumping to calculation without setting up. Strong candidates restate the problem and define notation before computing. Weak candidates start multiplying numbers.
  • Skipping the sanity check. Every answer needs a limiting case — what happens when $n \to \infty$? What happens when the probability goes to 0 or 1? Interviewers watch for this.
  • Covering for not knowing. If you have never seen martingale stopping theorems, saying "I don't have the tools for this but I can try a simulation-based estimate" is much better than bluffing.
  • Getting lost in algebra. If your work spirals into a page of symbols without a clean answer, step back and look for a combinatorial or symmetry-based shortcut.
  • Not asking clarifying questions. Ambiguous problems deserve clarification. "Are the draws with or without replacement?" is almost always a legitimate question.
  • Refusing to guess when asked. When the interviewer says "give me your best estimate, even if you are not sure," they mean it. Give a number with your reasoning and explicit uncertainty.

FAQ

Do I need a PhD to interview for quant research?

Many firms require one or a comparable track record. Two Sigma, Citadel GQS, D. E. Shaw's top teams, and most hedge fund research groups filter heavily on a PhD or a master's with strong research output. Some desks take strong undergraduates — especially Jane Street — but the bar is still PhD-adjacent.

How much programming do they test?

Some firms run a separate coding round (usually C++ or Python), often with a statistics angle like "implement a running correlation" or "simulate a random walk." Research rounds themselves are usually pen-and-paper.
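For the "running correlation" style of coding question, a one-pass Welford-style update is the idiomatic answer — O(1) memory, numerically stable, no re-scanning of history. A sketch of one reasonable solution (this is an assumption about what such a round wants, not any firm's reference answer):

```python
class RunningCorrelation:
    """Online Pearson correlation via Welford-style updates (one pass, O(1) memory)."""

    def __init__(self):
        self.n = 0
        self.mean_x = self.mean_y = 0.0
        self.m2_x = self.m2_y = self.c_xy = 0.0

    def update(self, x, y):
        self.n += 1
        dx = x - self.mean_x          # deviation from the OLD mean of x
        self.mean_x += dx / self.n
        dy = y - self.mean_y
        self.mean_y += dy / self.n
        # Sums of squares / cross-products pair an old-mean deviation
        # with a new-mean deviation — this is what keeps Welford stable.
        self.m2_x += dx * (x - self.mean_x)
        self.m2_y += dy * (y - self.mean_y)
        self.c_xy += dx * (y - self.mean_y)

    def correlation(self):
        if self.m2_x == 0.0 or self.m2_y == 0.0:
            return float("nan")
        return self.c_xy / (self.m2_x * self.m2_y) ** 0.5
```

Being ready to discuss why the naive two-pass formula is numerically worse, and how you would test the class, is what separates a pass from a strong pass.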

What math background should I have?

Real analysis at the level of Rudin, measure-theoretic probability, linear algebra, and applied statistics. Stochastic calculus helps for derivatives-heavy desks but is not required for most systematic equity or quant trading roles.

How are quant research interviews different from data science interviews?

Quant research leans harder on probability theory and less on machine learning breadth. Data science interviews test SQL, basic modeling, and business framing. Quant research tests raw mathematical maturity.

Can I switch from engineering to quant research?

Yes, but it takes dedicated retooling — typically a year of self-study or a master's program. Engineering backgrounds help for the coding rounds and for infrastructure-adjacent research roles.

Are the problems really as hard as the rumors?

They are harder than you expect, but the bar on any single problem is usually lower than legend suggests. Most rounds have one or two problems you are expected to partially solve. Partial credit with clean reasoning often beats full solutions delivered messily.

What about Jane Street specifically?

Jane Street runs a distinctive interview process heavy on market-making and mental math games. See the quant trading guide for those specifics — for quant research roles there, expect standard probability and statistics with some applied signal discussion.

Related roadmaps