The coding interview at most companies tests roughly a dozen recurring patterns, not 200 unique problem types. Once you can recognize the pattern from the problem statement and the constraints, you've done 60% of the work — the remaining 40% is producing clean code, narrating your reasoning, and handling follow-ups.
Below are 12 patterns that cover almost every coding interview question you'll see at FAANG, hedge funds, and quant trading firms. Each section gives a representative problem, a worked solution, complexity analysis, and the follow-ups interviewers actually push on.
If you'd rather drill these against an AI interviewer that asks clarifying questions and pushes back on shaky answers, run a coding mock — the mock generates fresh problems and grades your reasoning, not just your code.
1. Two pointers
Representative problem. Given a sorted array of integers and a target sum, return any pair of indices whose values sum to the target. O(n) time, O(1) extra space.
Pattern recognition. Sorted input + pair-finding + O(1) space hint = two pointers. The array being sorted is the load-bearing constraint; without it you'd need a hash set.
Solution.
    def two_sum_sorted(nums: list[int], target: int) -> tuple[int, int] | None:
        lo, hi = 0, len(nums) - 1
        while lo < hi:
            s = nums[lo] + nums[hi]
            if s == target:
                return lo, hi
            if s < target:
                lo += 1
            else:
                hi -= 1
        return None
Complexity. O(n) time, O(1) extra space.
Follow-ups interviewers push. What if the array isn't sorted (use a hash map of value → index; O(n) time, O(n) space — the trade-off is the point of the question)? What if there are duplicates and you need all pairs (skip duplicates after a hit)? What if it's a stream and you can't materialize the array (keep a hash set of values seen so far and check each new value's complement; if memory is bounded, accept approximation with something like a Bloom filter)?
2. Sliding window
Representative problem. Given a string, return the length of the longest substring without repeating characters.
Pattern recognition. "Longest / shortest / count of contiguous substrings or subarrays satisfying X" → sliding window. The window expands while the condition holds and shrinks when it breaks.
Solution.
    def longest_unique_substring(s: str) -> int:
        seen = {}   # char -> last index seen
        start = 0   # left edge of current window
        best = 0
        for i, c in enumerate(s):
            if c in seen and seen[c] >= start:
                start = seen[c] + 1  # shrink past the previous occurrence
            seen[c] = i
            best = max(best, i - start + 1)
        return best
Complexity. O(n) time, O(min(n, alphabet)) space.
Follow-ups. Return the substring itself, not just the length. Generalize to "longest substring with at most k distinct characters" (still sliding window; track a count of distinct chars). Stream variant where you can't go back — same algorithm, just emit best continuously.
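The k-distinct generalization mentioned above can be sketched like this (function name mine; the shape is the same expand/shrink loop, with a count map standing in for the last-seen map):

```python
from collections import defaultdict

def longest_k_distinct(s: str, k: int) -> int:
    """Longest substring with at most k distinct characters."""
    counts: dict[str, int] = defaultdict(int)
    start = best = 0
    for i, c in enumerate(s):
        counts[c] += 1
        while len(counts) > k:        # too many distinct chars: shrink from the left
            left = s[start]
            counts[left] -= 1
            if counts[left] == 0:
                del counts[left]
            start += 1
        best = max(best, i - start + 1)
    return best
```

Deleting a key when its count hits zero is what keeps `len(counts)` an accurate distinct-character count.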
3. Hash map / counting
Representative problem. Given an array, return all elements that appear more than n/3 times.
Pattern recognition. Counting and filtering — the canonical hash-map workhorse. Almost every "find duplicates / find frequencies / group by" problem reduces to this.
Solution.
    from collections import Counter

    def majority_elements(nums: list[int]) -> list[int]:
        n = len(nums)
        counts = Counter(nums)
        return [x for x, c in counts.items() if c > n // 3]
Complexity. O(n) time, O(n) space.
Follow-ups. Do it in O(1) space (Boyer-Moore voting — interviewers love this; there are at most two such elements, so you only need two candidate slots). What if the array is sorted (single pass with run-length counting)? What if the array doesn't fit in memory (count-min sketch with bounded error)?
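The Boyer-Moore voting follow-up, sketched (name `majority_elements_o1` is mine). The counting pass is O(1) extra space; a second verification pass is required because the surviving candidates are only *possible* majorities:

```python
def majority_elements_o1(nums: list[int]) -> list[int]:
    """Elements appearing more than n/3 times, via Boyer-Moore voting."""
    cand1 = cand2 = None
    cnt1 = cnt2 = 0
    for x in nums:
        if x == cand1:
            cnt1 += 1
        elif x == cand2:
            cnt2 += 1
        elif cnt1 == 0:
            cand1, cnt1 = x, 1
        elif cnt2 == 0:
            cand2, cnt2 = x, 1
        else:                      # x matches neither candidate: everyone pays one vote
            cnt1 -= 1
            cnt2 -= 1
    n = len(nums)
    # Verify: voting only guarantees candidates survive, not that they qualify.
    return [c for c in (cand1, cand2)
            if c is not None and nums.count(c) > n / 3]
```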
4. Binary search
Representative problem. Given a sorted, rotated array (e.g. [4,5,6,7,0,1,2]) and a target, return its index in O(log n).
Pattern recognition. Sorted (or partially sorted) input + better-than-linear time requirement = binary search. The trick with the rotated variant is that one of the two halves is always sorted — you binary-search by deciding which side of the pivot the target lives on.
Solution.
    def search_rotated(nums: list[int], target: int) -> int:
        lo, hi = 0, len(nums) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if nums[mid] == target:
                return mid
            if nums[lo] <= nums[mid]:  # left half is sorted
                if nums[lo] <= target < nums[mid]:
                    hi = mid - 1
                else:
                    lo = mid + 1
            else:  # right half is sorted
                if nums[mid] < target <= nums[hi]:
                    lo = mid + 1
                else:
                    hi = mid - 1
        return -1
Complexity. O(log n) time, O(1) space.
Follow-ups. Handle duplicates (nums[lo] == nums[mid] is ambiguous; you have to fall back to linear in that branch). Find the pivot index itself. Adapt to "find smallest element ≥ k" — that's a generic lower-bound binary search, the most useful tool in your kit.
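That lower-bound search is worth having memorized cold; it mirrors Python's `bisect.bisect_left`. A sketch:

```python
def lower_bound(nums: list[int], k: int) -> int:
    """Index of the smallest element >= k in sorted nums, or len(nums) if none."""
    lo, hi = 0, len(nums)          # note: hi starts past the end (half-open range)
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < k:
            lo = mid + 1           # answer is strictly to the right of mid
        else:
            hi = mid               # mid might itself be the answer
    return lo
```

The half-open `[lo, hi)` invariant is what makes this variant hard to get wrong: the loop ends when `lo == hi`, and that index is the answer by construction.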
5. BFS
Representative problem. Given a 2D grid of 0s (water) and 1s (land), return the shortest distance between two specific land cells, or -1 if disconnected. You can move up/down/left/right.
Pattern recognition. "Shortest path in unweighted graph / grid" → BFS. The breadth-first guarantee is what makes the first time you reach the target the shortest path.
Solution.
    from collections import deque

    def shortest_path(grid: list[list[int]], start: tuple[int, int],
                      end: tuple[int, int]) -> int:
        rows, cols = len(grid), len(grid[0])
        if grid[start[0]][start[1]] == 0 or grid[end[0]][end[1]] == 0:
            return -1
        queue = deque([(start, 0)])
        seen = {start}
        while queue:
            (r, c), d = queue.popleft()
            if (r, c) == end:
                return d
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 1 and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append(((nr, nc), d + 1))
        return -1
Complexity. O(rows × cols) time and space.
Follow-ups. Multi-source BFS (start with multiple cells in the queue, distance 0; useful for "rotting oranges" and similar). Bidirectional BFS (shrinks search space when start and end are far apart). Diagonal moves (8 neighbors instead of 4).
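A sketch of the multi-source idea, on a made-up variant (distance from every cell to the nearest land cell; function name mine). The only change from single-source BFS is seeding the queue with all sources at distance 0:

```python
from collections import deque

def distance_to_land(grid: list[list[int]]) -> list[list[int]]:
    """Distance from each cell to its nearest land cell (1); -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = [[-1] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):              # seed every source at distance 0
        for c in range(cols):
            if grid[r][c] == 1:
                dist[r][c] = 0
                queue.append((r, c))
    while queue:                       # ordinary BFS from here on
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist
```

Using the `dist` matrix itself as the visited set (`== -1` means unvisited) saves a separate `seen` structure.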
6. DFS
Representative problem. Given a 2D grid of 0s and 1s, count the number of connected islands.
Pattern recognition. "Count connected components" or "explore all reachable cells from each unvisited start" → DFS. For this kind of problem, recursive DFS is usually quicker to write than BFS.
Solution.
    def count_islands(grid: list[list[int]]) -> int:
        if not grid:
            return 0
        rows, cols = len(grid), len(grid[0])
        seen = [[False] * cols for _ in range(rows)]

        def dfs(r: int, c: int) -> None:
            if (r < 0 or r >= rows or c < 0 or c >= cols
                    or seen[r][c] or grid[r][c] == 0):
                return
            seen[r][c] = True
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                dfs(r + dr, c + dc)

        count = 0
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 1 and not seen[r][c]:
                    dfs(r, c)
                    count += 1
        return count
Complexity. O(rows × cols) time and space.
Follow-ups. Convert to iterative DFS to avoid stack overflow on huge grids (use an explicit stack). Find the largest island instead of counting (return the size from the recursion). Modify the grid in place to save the seen array — common interviewer ask, watch for the "what if the input is read-only" follow-up.
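The iterative conversion is nearly mechanical — replace the call stack with an explicit one (helper name mine):

```python
def dfs_iterative(grid: list[list[int]], r0: int, c0: int,
                  seen: list[list[bool]]) -> None:
    """Mark every land cell reachable from (r0, c0), without recursion."""
    rows, cols = len(grid), len(grid[0])
    stack = [(r0, c0)]
    while stack:
        r, c = stack.pop()
        # Same guard clause as the recursive version, checked on pop.
        if not (0 <= r < rows and 0 <= c < cols) or seen[r][c] or grid[r][c] == 0:
            continue
        seen[r][c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            stack.append((r + dr, c + dc))
```

Same visit order guarantees aren't preserved (the stack reverses neighbor order), but for connected-component problems the order doesn't matter.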
7. Dynamic programming (1D)
Representative problem. Given an array of stock prices indexed by day, find the maximum profit from one buy and one sell. You must buy before you sell.
Pattern recognition. "Optimum over a sequence with overlapping subproblems" → DP. The 1D case usually reduces to "track the best answer seen so far in one pass."
Solution.
    def max_profit(prices: list[int]) -> int:
        best, lowest = 0, float('inf')
        for p in prices:
            lowest = min(lowest, p)
            best = max(best, p - lowest)
        return best
Complexity. O(n) time, O(1) space.
Follow-ups. Two transactions instead of one (track best/lowest at four states: bought1, sold1, bought2, sold2). Unlimited transactions (any time price goes up, take the gain). With a fee per trade. With a cooldown after each sale. The state machine generalizes once you see it.
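Here's a sketch of the four-state machine for two transactions (function name mine). Each variable holds the best cash position attainable in that state so far:

```python
def max_profit_two(prices: list[int]) -> int:
    """Max profit with at most two buy/sell transactions."""
    bought1 = bought2 = float('-inf')   # holding a share costs cash
    sold1 = sold2 = 0
    for p in prices:
        bought1 = max(bought1, -p)          # buy the first share
        sold1 = max(sold1, bought1 + p)     # sell the first share
        bought2 = max(bought2, sold1 - p)   # buy the second, out of first profit
        sold2 = max(sold2, bought2 + p)     # sell the second
    return sold2
```

The fee, cooldown, and k-transaction variants are all the same trick: add states or adjust the transition costs.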
8. Dynamic programming (2D)
Representative problem. Given two strings, return their longest common subsequence's length.
Pattern recognition. "Optimum across two sequences" or "edit-distance-shaped problem" → 2D DP. The grid lets you express "best answer using the first i characters of one and the first j of the other."
Solution.
    def lcs_length(a: str, b: str) -> int:
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[m][n]
Complexity. O(m × n) time and space.
Follow-ups. Reconstruct the actual subsequence (backtrack from dp[m][n] to the origin). Drop space to O(min(m, n)) using a single row. Adapt to longest common substring (not subsequence — different recurrence). Edit distance (insert/delete/replace each cost 1).
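The space optimization sketched (function name mine): each row of the table depends only on the previous row, so two rows suffice, and swapping the inputs keeps the rows the size of the shorter string:

```python
def lcs_length_two_rows(a: str, b: str) -> int:
    """LCS length in O(min(m, n)) space via rolling rows."""
    if len(a) < len(b):
        a, b = b, a                      # iterate over the longer, store the shorter
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0] * (len(b) + 1)
        for j, cb in enumerate(b, start=1):
            if ca == cb:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[-1]
```

Note the cost of this trick: with only two rows you can no longer backtrack to reconstruct the subsequence itself.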
9. Heap / priority queue
Representative problem. Given a stream of integers, return the median after each new value.
Pattern recognition. "Top-k / running statistic over a stream" → heap. For the median specifically, two heaps (max-heap of the lower half, min-heap of the upper half) give O(log n) inserts and O(1) median.
Solution.
    import heapq

    class MedianStream:
        def __init__(self) -> None:
            self.lo: list[int] = []  # max-heap (store negatives)
            self.hi: list[int] = []  # min-heap

        def add(self, x: int) -> None:
            if not self.lo or x <= -self.lo[0]:
                heapq.heappush(self.lo, -x)
            else:
                heapq.heappush(self.hi, x)
            # Rebalance so |lo| - |hi| in {0, 1}
            if len(self.lo) > len(self.hi) + 1:
                heapq.heappush(self.hi, -heapq.heappop(self.lo))
            elif len(self.hi) > len(self.lo):
                heapq.heappush(self.lo, -heapq.heappop(self.hi))

        def median(self) -> float:
            if len(self.lo) > len(self.hi):
                return float(-self.lo[0])
            return (-self.lo[0] + self.hi[0]) / 2
Complexity. O(log n) per insert, O(1) per median, O(n) space.
Follow-ups. Top-k largest in a stream (single min-heap of size k). K-way merge of sorted lists (heap of (value, list-index, item-index)). Median over a sliding window of fixed size — much harder; the textbook answer involves two indexed multisets, but interviewers will accept a heap with lazy deletion and a complexity argument.
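The top-k follow-up in a few lines (function name mine) — the size-k min-heap keeps exactly the k best seen so far, with the weakest of them at the root:

```python
import heapq

def top_k(stream, k: int) -> list[int]:
    """k largest values from an iterable, in O(n log k) time and O(k) space."""
    heap: list[int] = []
    for x in stream:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:                # beats the smallest of the current top k
            heapq.heapreplace(heap, x)   # pop the root and push x in one operation
    return sorted(heap, reverse=True)
```

The min-heap feels backwards at first — you want the *largest* elements — but it's the smallest of the kept elements you need instant access to, since that's the one a newcomer has to beat.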
10. Backtracking
Representative problem. Given a string of digits, return all possible letter combinations the digits could represent on a phone keypad.
Pattern recognition. "Generate all valid combinations / permutations / subsets satisfying X" → backtracking. The depth of the recursion tree is the length of the input; the branching factor is the choices at each step.
Solution.
    def letter_combinations(digits: str) -> list[str]:
        if not digits:
            return []
        pad = {
            '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
            '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
        }
        out: list[str] = []

        def backtrack(i: int, path: list[str]) -> None:
            if i == len(digits):
                out.append(''.join(path))
                return
            for c in pad[digits[i]]:
                path.append(c)
                backtrack(i + 1, path)
                path.pop()

        backtrack(0, [])
        return out
Complexity. O(4^n × n) time (worst case is digits that map to 4 letters), O(n) recursion depth.
Follow-ups. N-queens (place N queens on an NxN board with no attacks). Sudoku solver. Generate all subsets of a set (the canonical pattern). Generate all permutations. Each variant changes the choice and the constraint, but the structure stays the same.
11. Graph algorithms
Representative problem. Given a list of courses and prerequisites, return any valid order in which to take them, or empty if there's a cycle.
Pattern recognition. "Schedule / order tasks with dependencies" → topological sort. Either Kahn's algorithm (BFS using in-degree) or DFS post-order.
Solution (Kahn's).
    from collections import defaultdict, deque

    def course_order(num_courses: int, prereqs: list[list[int]]) -> list[int]:
        graph: dict[int, list[int]] = defaultdict(list)
        in_degree = [0] * num_courses
        for course, pre in prereqs:
            graph[pre].append(course)
            in_degree[course] += 1
        queue = deque(i for i, d in enumerate(in_degree) if d == 0)
        order: list[int] = []
        while queue:
            node = queue.popleft()
            order.append(node)
            for nb in graph[node]:
                in_degree[nb] -= 1
                if in_degree[nb] == 0:
                    queue.append(nb)
        return order if len(order) == num_courses else []
Complexity. O(V + E) time and space.
Follow-ups. Detect a cycle even without ordering (DFS with recursion stack tracking). Shortest path in a DAG (topological sort + relax in order). Find all valid orderings (backtracking on top of topo-sort). Dijkstra (weighted, non-negative edges). Bellman-Ford (negative weights). Union-Find (connectivity, MST).
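A sketch of the cycle-detection follow-up (function name mine), using the standard three-color DFS: white = unvisited, gray = on the current recursion stack, black = fully explored. An edge into a gray node is a back edge, i.e. a cycle:

```python
def has_cycle(num_nodes: int, edges: list[list[int]]) -> bool:
    """True if the directed graph given by edges (u -> v pairs) has a cycle."""
    graph: dict[int, list[int]] = {i: [] for i in range(num_nodes)}
    for u, v in edges:
        graph[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * num_nodes

    def dfs(u: int) -> bool:
        color[u] = GRAY                  # entering: u is on the recursion stack
        for v in graph[u]:
            if color[v] == GRAY:
                return True              # back edge into the stack: cycle found
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK                 # leaving: u fully explored, safe forever
        return False

    return any(color[u] == WHITE and dfs(u) for u in range(num_nodes))
```

Kahn's algorithm detects the same thing for free (`order` comes back short), but interviewers often want the recursion-stack version stated explicitly.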
12. Tries
Representative problem. Implement a data structure with insert(word), search(word), and startsWith(prefix) operations.
Pattern recognition. "Lots of strings + prefix queries" → trie. Hash sets handle exact match in O(1); tries handle prefix match in O(prefix length) and shine when you have many lookups against a fixed dictionary.
Solution.
    class TrieNode:
        __slots__ = ('children', 'end')

        def __init__(self) -> None:
            self.children: dict[str, 'TrieNode'] = {}
            self.end = False

    class Trie:
        def __init__(self) -> None:
            self.root = TrieNode()

        def insert(self, word: str) -> None:
            node = self.root
            for c in word:
                node = node.children.setdefault(c, TrieNode())
            node.end = True

        def _walk(self, prefix: str) -> TrieNode | None:
            node = self.root
            for c in prefix:
                if c not in node.children:
                    return None
                node = node.children[c]
            return node

        def search(self, word: str) -> bool:
            node = self._walk(word)
            return node is not None and node.end

        def starts_with(self, prefix: str) -> bool:
            return self._walk(prefix) is not None
Complexity. O(L) per operation where L is the word length; O(total characters) space.
Follow-ups. Wildcard search (regex-like . matches any char — recurse over all children at the wildcard position). Autocomplete with top-k frequencies (each node stores a min-heap of best completions). Replace prefixes in a sentence (the canonical "replace words" problem). Memory-optimize by collapsing single-child chains (radix tree).
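A sketch of the wildcard follow-up, on a bare-bones dict-based trie rather than the class above (the `'$'` end-marker convention and function names are my own). The key move is that `'.'` branches the search over every child:

```python
def insert(root: dict, word: str) -> None:
    """Nested-dict trie; '$' marks end of word."""
    node = root
    for c in word:
        node = node.setdefault(c, {})
    node['$'] = {}

def wildcard_search(root: dict, word: str) -> bool:
    """Match word against the trie; '.' matches any single character."""
    def dfs(node: dict, i: int) -> bool:
        if i == len(word):
            return '$' in node
        c = word[i]
        if c == '.':
            return any(dfs(child, i + 1)          # branch over all children
                       for k, child in node.items() if k != '$')
        return c in node and dfs(node[c], i + 1)
    return dfs(root, 0)
```

Worst case this degrades to exploring the whole trie (a word of all dots), which is the complexity discussion the interviewer is fishing for.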
FAQ
Should I memorize these solutions?
No. Memorize the pattern recognition — what about the problem statement tells you which tool applies. The actual code falls out once you've picked the tool.
What's the right ratio of practice to theory?
Roughly 70/30 in favor of practice. Reading about these patterns is necessary but never sufficient. You build the recognition reflex by attempting problems, getting stuck, looking at the solution, then doing a similar problem cold the next day.
How many problems do I need to solve before I'm "ready"?
Fewer than people think, if the practice is deliberate. About 80–120 problems across these 12 patterns is enough for most candidates, provided you're solving them under time pressure (45 minutes max), narrating out loud, and reviewing the solutions you couldn't get. 500+ problems with no narration is worse preparation than 100 with it.
Why does Python show up so often in interviews?
Because clean Python is closer to pseudocode than any other mainstream language, and interview problems mostly test reasoning, not language knowledge. Use whatever language you're most fluent in, but if you're choosing fresh, Python is the safest pick at almost every firm.
What if the interviewer asks me a problem I've never seen?
Tell them. "I haven't seen this exact problem, so let me think out loud." Then start with a brute-force solution, state its complexity, and look for the structure that lets you do better. Interviewers grade reasoning, not recall.