Conversation
- New tmdb_cache table (Supabase migration)
- fetchTmdbData checks the cache first and returns cached data if <30 days old
- Cache key: IMDB ID, or SHA256 of the cleaned title
- Misses are cached too (avoids repeated lookups for non-existent content)
- Cache writes are non-blocking and failure-tolerant
- Extracted cleanTitleForSearch as a reusable function

Saves 2-3 TMDB API calls per page view for previously-seen content.
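The cache-keying and freshness rules described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: `cleanTitleForSearch` here is a simplified stand-in for the real helper, and the 30-day TTL mirrors the commit description.

```typescript
import { createHash } from "crypto";

// 30-day TTL, per the commit description above.
const CACHE_TTL_MS = 30 * 24 * 60 * 60 * 1000;

// Simplified stand-in for the real cleanTitleForSearch helper.
function cleanTitleForSearch(raw: string): string {
  return raw.replace(/[._]/g, " ").replace(/\s+/g, " ").trim();
}

// Cache key: prefer the IMDB ID; otherwise hash the cleaned, normalized title.
function getCacheKey(imdbId: string, titleHint?: string): string {
  if (imdbId) return imdbId;
  if (titleHint) {
    const cleaned = cleanTitleForSearch(titleHint).toLowerCase();
    return "title:" + createHash("sha256").update(cleaned).digest("hex").slice(0, 32);
  }
  return "";
}

// A cached row is served only if it was written within the last 30 days.
function isFresh(updatedAt: string, now: number = Date.now()): boolean {
  return now - new Date(updatedAt).getTime() < CACHE_TTL_MS;
}
```

Hashing the *cleaned* title (rather than the raw hint) is what makes release-name variants of the same content collapse onto one key — the point raised in the review comment on `getCacheKey` below.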
Addresses Copilot review:
1. RLS policy now restricts writes to service_role only
2. getSupabaseClient() requires SUPABASE_SERVICE_ROLE_KEY (no anon key fallback)
- Check memory every 15s instead of 30s
- killOldestStreams now actually kills unwatched streams (was a no-op)
- Streams with active watchers (users watching) are always protected
- Reduce torrent min-age protection from 30min to 5min under critical/severe pressure
- Lower default maxConcurrentStreams from 10 to 4
- Pass pressure level to aggressiveCleanup for smarter eviction
- Fix systemd override to properly quote NODE_OPTIONS with --expose-gc
Pull request overview
Adds a Supabase-backed cache for TMDB enrichment results and tightens streaming memory-pressure behavior to reduce repeated external calls and mitigate OOM scenarios.
Changes:
- Introduce `tmdb_data` table (with RLS policies) to persist cached TMDB metadata keyed by IMDB ID or hashed title.
- Add Supabase-based read/write caching to `fetchTmdbData` (including caching misses) and refactor title-cleaning into a helper.
- Adjust streaming service defaults and memory-pressure cleanup behavior (shorter check intervals, lower default concurrency, more aggressive cleanup under critical/severe pressure).
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| supabase/migrations/20260303030000_tmdb_data.sql | Creates the TMDB cache table, index, and RLS policies for shared cached metadata. |
| src/lib/imdb/tmdb.ts | Implements cache keying + Supabase read/write caching around TMDB enrichment fetches. |
| src/lib/streaming/streaming.ts | Updates memory-pressure mitigation (intervals, defaults) and stream/torrent cleanup behavior. |
```ts
// Sort by creation time (oldest first) and kill up to `count`
const toKill = unwatchedStreams.slice(0, count);
```
killOldestStreams says it will sort by creation time, but unwatchedStreams.slice(0, count) uses insertion order without sorting. Since ActiveStream has createdAt, sort unwatchedStreams by stream.createdAt (or by torrentAddedAt) before slicing so the “oldest” streams are actually targeted.
```diff
-const toKill = unwatchedStreams.slice(0, count);
+const sortedUnwatched = unwatchedStreams.slice().sort((a, b) => {
+  const aCreated = a[1].createdAt;
+  const bCreated = b[1].createdAt;
+  if (aCreated == null || bCreated == null) {
+    // If creation time is missing, keep original relative order
+    return 0;
+  }
+  if (aCreated < bCreated) return -1;
+  if (aCreated > bCreated) return 1;
+  return 0;
+});
+const toKill = sortedUnwatched.slice(0, count);
```
```ts
    }
  );
}
this.activeStreams.delete(id);
```
Before deleting this.torrentWatchers for an infohash, clear any pending cleanupTimer on that entry. Otherwise the scheduled cleanup callback will still run later with stale state (and can log misleading messages / do redundant work).
```diff
 this.activeStreams.delete(id);
+const watcherInfo = this.torrentWatchers.get(stream.infohash);
+if (watcherInfo?.cleanupTimer) {
+  clearTimeout(watcherInfo.cleanupTimer);
+  // Avoid holding onto a stale timeout reference
+  (watcherInfo as { cleanupTimer?: NodeJS.Timeout | undefined }).cleanupTimer = undefined;
+}
```
```ts
logger.warn('killOldestStreams completed', {
  killed: toKill.length,
  skippedWatched: unwatchedStreams.length - toKill.length + (this.activeStreams.size - unwatchedStreams.length),
  remaining: this.activeStreams.size,
```
skippedWatched is computed after mutating this.activeStreams (you delete entries in the loop). Because unwatchedStreams.length was captured before deletions but this.activeStreams.size is post-deletion, this value can be wrong/negative. Compute watchedCount / unwatchedCount and killedCount before deleting, then log those stable counts.
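The fix this comment asks for can be sketched as a pure helper: snapshot the counts *before* mutating `activeStreams`, then log the stable values. The `ActiveStream` shape and the `killAndLog` helper below are simplified assumptions for illustration, not the project's real interfaces.

```typescript
// Hypothetical simplified stream shape: only the watcher count matters here.
interface ActiveStream {
  watchers: number;
}

function killAndLog(
  activeStreams: Map<string, ActiveStream>,
  toKill: string[]
): { killed: number; skippedWatched: number; remaining: number } {
  // Stable counts, captured before any deletion mutates the map.
  const watchedCount = [...activeStreams.values()].filter(s => s.watchers > 0).length;
  const killedCount = toKill.filter(id => activeStreams.has(id)).length;

  for (const id of toKill) activeStreams.delete(id);

  return {
    killed: killedCount,
    skippedWatched: watchedCount, // cannot go negative: computed pre-deletion
    remaining: activeStreams.size,
  };
}
```

Because `watchedCount` and `killedCount` are fixed before the loop runs, the logged numbers always sum consistently with the pre-cleanup map size.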
```ts
this.maxConcurrentStreams = options.maxConcurrentStreams ?? 4;
this.streamTimeout = options.streamTimeout ?? 120000;
```
The constructor default for maxConcurrentStreams was changed to 4, but the StreamingServiceOptions JSDoc still documents a different default value. Please update the documentation (or align the default) so callers and tests don’t rely on outdated defaults.
```sql
  created_at timestamptz NOT NULL DEFAULT now(),
  updated_at timestamptz NOT NULL DEFAULT now()
);
```
updated_at is indexed for stale-entry cleanup, but this migration doesn’t add a trigger to keep updated_at current on updates/upserts. Other tables in this repo use an update_updated_at_column() trigger; consider adding a BEFORE UPDATE trigger for tmdb_data so upserts refresh updated_at and cleanup logic can rely on it.
```sql
-- Keep updated_at current on updates/upserts
CREATE TRIGGER set_tmdb_data_updated_at
  BEFORE UPDATE ON tmdb_data
  FOR EACH ROW
  EXECUTE FUNCTION update_updated_at_column();
```
```ts
tagline: data.tagline,
cast_names: data.cast,
writers: data.writers,
content_rating: data.contentRating,
```
setCache upserts rows but doesn’t set updated_at. Without a DB trigger, repeated refreshes won’t bump updated_at, which undermines the “stale entry cleanup” index/purpose. Either add a BEFORE UPDATE trigger on tmdb_data (preferred) or include updated_at: now()/new Date().toISOString() in the upsert payload for conflict updates.
```diff
 content_rating: data.contentRating,
+updated_at: new Date().toISOString(),
```
```ts
// Destroy the stream's torrent
const torrent = (this.client?.torrents ?? []).find(t => t.infoHash === stream.infohash);
if (torrent) {
  (torrent.destroy as (opts: { destroyStore: boolean }, callback?: (err: Error | null) => void) => void)(
    { destroyStore: true },
    () => {
      this.deleteTorrentFolder(torrent.name, stream.infohash).catch(() => {});
    }
  );
}
this.activeStreams.delete(id);
this.torrentWatchers.delete(stream.infohash);
this.torrentAddedAt.delete(stream.infohash);
}
```
This logic destroys the underlying torrent for a single ActiveStream (torrent.destroy(...)), but multiple activeStreams can share the same infohash (e.g., different files from the same torrent). Destroying the torrent here will break those other streams, and the code doesn’t remove/close the other activeStreams entries for that infohash. Consider selecting victims at the torrent level (by infohash) and, when destroying a torrent, also destroy/remove all streams that reference that infohash (including calling activeStream.stream.destroy()).
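The torrent-level eviction this comment suggests can be sketched as follows. The `StreamEntry` shape, `killByInfohash` name, and `destroyTorrent` callback are illustrative assumptions, not the project's actual API.

```typescript
// Hypothetical simplified stream entry: each stream references a shared torrent
// by infohash and owns a destroyable media stream.
interface StreamEntry {
  id: string;
  infohash: string;
  destroy: () => void;
}

// Evict at the torrent level: destroy each victim torrent exactly once, and
// close + remove EVERY stream that references that infohash, so no stream is
// left pointing at a destroyed torrent.
function killByInfohash(
  streams: Map<string, StreamEntry>,
  destroyTorrent: (infohash: string) => void,
  victims: string[] // infohashes chosen for eviction
): void {
  for (const infohash of victims) {
    destroyTorrent(infohash); // shared torrent destroyed once per infohash
    for (const [id, s] of [...streams]) {
      if (s.infohash === infohash) {
        s.destroy();          // close the stream referencing the dead torrent
        streams.delete(id);
      }
    }
  }
}
```

Selecting victims by infohash (rather than per stream) is what prevents one stream's cleanup from silently breaking a sibling stream serving a different file from the same torrent.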
```sql
-- Public read/write (no user-scoping needed, this is shared cache)
CREATE POLICY "tmdb_data_public_read" ON tmdb_data FOR SELECT USING (true);
CREATE POLICY "tmdb_data_service_write" ON tmdb_data
  FOR ALL
  USING (auth.role() = 'service_role')
  WITH CHECK (auth.role() = 'service_role');
```
The comment says “Public read/write”, but the policies only allow public read; writes are restricted to service_role. Please update the comment to match the actual policy intent (or adjust the policies if public write was intended).
```ts
function getCacheKey(imdbId: string, titleHint?: string): string {
  if (imdbId) return imdbId;
  if (titleHint) return 'title:' + createHash('sha256').update(titleHint.toLowerCase().trim()).digest('hex').slice(0, 32);
```
For title-based lookups, getCacheKey hashes the raw titleHint, while the TMDB search uses a cleaned title (cleanTitleForSearch). This will fragment the cache (same content gets multiple keys) and can grow the table quickly. Consider hashing cleanTitleForSearch(titleHint) (and/or including year) so equivalent titles map to the same cache key.
```diff
-if (titleHint) return 'title:' + createHash('sha256').update(titleHint.toLowerCase().trim()).digest('hex').slice(0, 32);
+if (titleHint) {
+  const cleanedTitle = cleanTitleForSearch(titleHint).toLowerCase().trim();
+  return 'title:' + createHash('sha256').update(cleanedTitle).digest('hex').slice(0, 32);
+}
```
```diff
 private killOldestStreams(count: number): void {
-  // NEVER kill active streams. Users are watching these — killing them mid-playback
-  // is a terrible UX. If memory is truly critical, systemd's MemoryMax will handle it.
-  // Emergency/aggressive cleanup already removes idle torrents without active watchers.
-  logger.warn('killOldestStreams called but SKIPPING — active streams are protected', {
-    requestedKill: count,
-    activeStreams: this.activeStreams.size,
+  // Only kill streams that have NO active watchers (nobody is watching).
+  // Active watchers = someone has an SSE connection open for this torrent.
+  // This protects users mid-playback while still freeing unwatched resources.
+  const unwatchedStreams: Array<[string, ActiveStream]> = [];
```
killOldestStreams behavior changed materially (it now terminates streams/torrents under memory pressure). Since this file already has extensive unit tests, please add focused tests covering these memory-pressure cleanup paths (e.g., watched torrents aren’t killed, unwatched victims are chosen deterministically, and destroying a torrent cleans up all streams for that infohash).
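One such focused test could pin down deterministic victim selection. `selectVictims` below is a hypothetical pure extraction of the selection logic (so it can be tested without a real `StreamingService` or torrent client), not the project's actual method; the `StreamInfo` shape is likewise an assumption.

```typescript
// Hypothetical pure extraction of the victim-selection rule:
// never kill watched streams; among unwatched ones, pick the oldest first.
interface StreamInfo {
  infohash: string;
  createdAt: number; // epoch ms
  watchers: number;  // open SSE connections for this torrent
}

function selectVictims(streams: StreamInfo[], count: number): StreamInfo[] {
  return streams
    .filter(s => s.watchers === 0)              // watched streams are protected
    .sort((a, b) => a.createdAt - b.createdAt)  // oldest first => deterministic
    .slice(0, count);
}
```

A test then asserts that a stream with watchers is never chosen and that, among unwatched streams, the one with the smallest `createdAt` is evicted first.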
Addressed all 12 Copilot review comments in commit bf4ed6b (streaming.ts, tmdb.ts, and the migration).
Description
Type of Change
TDD Checklist (MANDATORY)
All items must be checked before merge:
- [ ] All tests pass (`pnpm test`)
- [ ] No `any` types used (unless explicitly justified)
- [ ] Type check passes (`pnpm tsc --noEmit`)
- [ ] Lint passes (`pnpm lint`)

Security Checklist
Testing
Test Coverage
How to Test
Screenshots (if applicable)
Related Issues
Closes #
Additional Notes