
ADR-004 — Cursor-based pagination over skip/take for high-volume tables

Date: 2025-03-30
Status: Accepted
Deciders: Nicolas (founder)

Context

The platform exposes paginated list endpoints for alerts and audit-log. These tables grow without bound (every audit generates alerts; every admin action creates an audit log entry). PostgreSQL OFFSET (skip/take) degrades to O(n): the database must walk and discard every skipped row before returning the page, so serving page 200 for a client with 10,000 alerts approaches a full index scan.

Decision

Use cursor-based pagination, with the record id (cuid) as the cursor, for the alerts and audit-log endpoints.

Implementation

lib/pagination.ts provides three pure functions:
  • parseCursorParams(searchParams) → { cursor, limit } — parses ?cursor=<id>&limit=<n>, enforces the 1–200 range on limit
  • cursorArgs(cursor, limit) → Prisma { take, cursor, skip } fragment
  • buildCursorPage(items, limit) → { data, nextCursor } — slices to limit, returns a null nextCursor when no more pages remain

Response contract: { data: T[], nextCursor: string | null }
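The three helpers could look roughly like the sketch below. The function names and the response contract come from this ADR; the bodies are assumptions — in particular the take: limit + 1 over-fetch used to detect a next page, and the default limit of 50, are illustrative choices not stated above.

```typescript
export interface CursorParams {
  cursor: string | null;
  limit: number;
}

// Parses ?cursor=<id>&limit=<n>; clamps limit into the 1–200 range
// (default of 50 is an assumption).
export function parseCursorParams(searchParams: URLSearchParams): CursorParams {
  const cursor = searchParams.get("cursor");
  const raw = Number(searchParams.get("limit") ?? 50);
  const limit = Number.isFinite(raw)
    ? Math.min(Math.max(Math.trunc(raw), 1), 200)
    : 50;
  return { cursor, limit };
}

// Builds the Prisma findMany fragment. Fetching limit + 1 rows lets the
// caller detect whether another page exists; skip: 1 excludes the cursor
// row itself from the results.
export function cursorArgs(cursor: string | null, limit: number) {
  return {
    take: limit + 1,
    ...(cursor ? { cursor: { id: cursor }, skip: 1 } : {}),
  };
}

// Slices the over-fetched page back down to limit; nextCursor is null
// on the last page, per the response contract.
export function buildCursorPage<T extends { id: string }>(items: T[], limit: number) {
  const hasMore = items.length > limit;
  const data = hasMore ? items.slice(0, limit) : items;
  return { data, nextCursor: hasMore ? data[data.length - 1].id : null };
}
```

A route handler would spread cursorArgs(...) into its prisma.alert.findMany call and return buildCursorPage(rows, limit) as the JSON body.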

Consequences

  • Breaking change for consumers of /api/v1/alerts and /api/v1/audit-log: skip/take/total removed; nextCursor added
  • Frontend must implement infinite scroll or “load more” pattern (not traditional numbered pages)
  • Count queries removed — total is no longer returned (O(n) COUNT avoided)
  • Tables with small bounded record counts (clients, findings) keep skip/take — premature optimization avoided
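Consumers follow the { data, nextCursor } contract by feeding each response's nextCursor back into the next request until it comes back null. A hypothetical helper (drainPages and its fetcher signature are illustrations, not part of the API):

```typescript
interface Page<T> {
  data: T[];
  nextCursor: string | null;
}

type PageFetcher<T> = (cursor: string | null) => Promise<Page<T>>;

// Drains every page of a cursor-paginated endpoint by following
// nextCursor until the server returns null.
export async function drainPages<T>(fetchPage: PageFetcher<T>): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | null = null;
  do {
    const page: Page<T> = await fetchPage(cursor);
    all.push(...page.data);
    cursor = page.nextCursor;
  } while (cursor !== null);
  return all;
}
```

In the frontend, fetchPage would wrap fetch("/api/v1/alerts?cursor=..."); a "load more" button keeps the latest cursor in component state instead of looping.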

Why not keyset pagination on (createdAt, id)?

createdAt is not unique — multiple records can share the same timestamp at millisecond precision. Using id as cursor with Prisma’s native cursor support (which tracks position in the ordered sequence) is simpler and correct without the composite key complexity.
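For comparison, keyset pagination on (createdAt, id) would need a tie-breaking OR filter paired with a composite orderBy. The fragment below is a hypothetical illustration of that extra complexity, not code from the project:

```typescript
// Hypothetical keyset filter for descending (createdAt, id) order.
// The second branch — equal createdAt, smaller id — is the tie-break
// that Prisma's id-only cursor avoids.
export function keysetWhere(createdAt: Date, id: string) {
  return {
    OR: [
      { createdAt: { lt: createdAt } },
      { createdAt, id: { lt: id } },
    ],
  };
}
```

The filter would have to be paired with orderBy: [{ createdAt: "desc" }, { id: "desc" }] on every query, and the client would need to carry both cursor fields.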

Alternatives rejected

  • Relay-style (opaque base64 cursors): More frontend-friendly but adds encoding layer with no functional benefit for internal API
  • Page tokens (encrypted offset): Prevents scraping but doesn’t solve the O(n) DB problem