Merged
169 changes: 127 additions & 42 deletions src/lib/imdb/tmdb.ts
@@ -2,8 +2,13 @@
* TMDB Enrichment — fetches rich metadata from TMDB using an IMDB ID.
* Makes 2-3 API calls: /find (IMDB→TMDB) + /movie or /tv (details+credits).
* Falls back to /search if /find returns nothing (common for obscure IMDB entries).
*
* Results are cached in the tmdb_data table to avoid repeated API calls.
*/

import { createClient } from '@supabase/supabase-js';
import { createHash } from 'crypto';

export interface TmdbData {
posterUrl: string | null;
backdropUrl: string | null;
@@ -20,10 +25,98 @@ const EMPTY: TmdbData = {
cast: null, writers: null, contentRating: null, tmdbId: null,
};

function getCacheKey(imdbId: string, titleHint?: string): string {
if (imdbId) return imdbId;
if (titleHint) return 'title:' + createHash('sha256').update(titleHint.toLowerCase().trim()).digest('hex').slice(0, 32);
Copilot AI (Mar 6, 2026):
For title-based lookups, getCacheKey hashes the raw titleHint, while the TMDB search uses a cleaned title (cleanTitleForSearch). This fragments the cache (the same content gets multiple keys) and can grow the table quickly. Consider hashing cleanTitleForSearch(titleHint), and/or including the year, so equivalent titles map to the same cache key.

Suggested change:
-  if (titleHint) return 'title:' + createHash('sha256').update(titleHint.toLowerCase().trim()).digest('hex').slice(0, 32);
+  if (titleHint) {
+    const cleanedTitle = cleanTitleForSearch(titleHint).toLowerCase().trim();
+    return 'title:' + createHash('sha256').update(cleanedTitle).digest('hex').slice(0, 32);
+  }
return '';
}
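As a standalone illustration of the review comment above, the sketch below contrasts raw-hint hashing with cleaned-hint hashing. It is not the module's code: cleanTitleForSearch here is a deliberately trimmed-down stand-in for the real helper, and titleCacheKey mirrors only the title branch of getCacheKey.

```typescript
import { createHash } from "node:crypto";

// Trimmed stand-in for the real cleanTitleForSearch (only a few of its rules).
function cleanTitleForSearch(titleHint: string): string {
  return titleHint
    .replace(/[._]/g, " ")
    .replace(/\b(1080p|720p|x264|webrip)\b/gi, "")
    .replace(/\s+/g, " ")
    .trim();
}

// Mirrors the title branch of getCacheKey: hash the hint, keep 32 hex chars.
function titleCacheKey(titleHint: string): string {
  return "title:" + createHash("sha256")
    .update(titleHint.toLowerCase().trim())
    .digest("hex")
    .slice(0, 32);
}

// Two release names for the same film: hashing the raw hints yields two
// distinct cache rows, hashing the cleaned hints yields one shared row.
const a = "Some.Movie.1080p.WEBRip.x264";
const b = "Some Movie 720p";
const fragmented = titleCacheKey(a) !== titleCacheKey(b);
const unified =
  titleCacheKey(cleanTitleForSearch(a)) === titleCacheKey(cleanTitleForSearch(b));
```

This is exactly the fragmentation the suggested change avoids: cleaning before hashing collapses equivalent release names onto one key.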

function getSupabaseClient() {
const url = process.env.SUPABASE_URL || process.env.NEXT_PUBLIC_SUPABASE_URL;
const key = process.env.SUPABASE_SERVICE_ROLE_KEY;
if (!url || !key) return null;
return createClient(url, key);
}

async function getCached(key: string): Promise<TmdbData | null> {
if (!key) return null;
try {
const supabase = getSupabaseClient();
if (!supabase) return null;
const { data } = await supabase
.from('tmdb_data')
.select('*')
Copilot AI (Mar 6, 2026):
getCached uses .select('*') even though only a subset of columns is read. Selecting only the needed columns (poster_url, backdrop_url, overview, tagline, cast_names, writers, content_rating, tmdb_id) reduces payload size and improves cache-lookup efficiency.

Suggested change:
-      .select('*')
+      .select('poster_url, backdrop_url, overview, tagline, cast_names, writers, content_rating, tmdb_id')
.eq('lookup_key', key)
.single();
if (!data) return null;

return {
posterUrl: data.poster_url,
backdropUrl: data.backdrop_url,
overview: data.overview,
tagline: data.tagline,
cast: data.cast_names,
writers: data.writers,
contentRating: data.content_rating,
tmdbId: data.tmdb_id,
};
} catch {
return null;
}
}

async function setCache(key: string, data: TmdbData): Promise<void> {
if (!key) return;
try {
const supabase = getSupabaseClient();
if (!supabase) return;
await supabase
.from('tmdb_data')
.upsert({
lookup_key: key,
tmdb_id: data.tmdbId,
poster_url: data.posterUrl,
backdrop_url: data.backdropUrl,
overview: data.overview,
tagline: data.tagline,
cast_names: data.cast,
writers: data.writers,
content_rating: data.contentRating,
Copilot AI (Mar 6, 2026):
setCache upserts rows but doesn't set updated_at. Without a DB trigger, repeated refreshes won't bump updated_at, which undermines the stale-entry cleanup index. Either add a BEFORE UPDATE trigger on tmdb_data (preferred) or include updated_at: new Date().toISOString() in the upsert payload so conflict updates refresh it.

Suggested change:
-        content_rating: data.contentRating,
+        content_rating: data.contentRating,
+        updated_at: new Date().toISOString(),
}, { onConflict: 'lookup_key' });
} catch {
// Cache write failure is non-critical
}
}
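The getCached/setCache pair above implements a cache-aside pattern on top of Supabase. A minimal in-memory sketch of the same pattern (all names invented for illustration), including the point made later in the diff that caching null misses prevents repeated fetches:

```typescript
// Cache-aside: consult the cache, fall through to a fetcher on a miss,
// then write the result back — including null results, so a miss is
// never re-fetched on subsequent lookups.
type Fetcher<T> = (key: string) => T | null;

function makeCachedLookup<T>(fetch: Fetcher<T>) {
  const cache = new Map<string, T | null>();
  let fetchCalls = 0;

  function lookup(key: string): T | null {
    if (cache.has(key)) return cache.get(key) ?? null; // hit (or cached miss)
    fetchCalls++;
    const value = fetch(key); // may be null: cache the miss too
    cache.set(key, value);
    return value;
  }

  return { lookup, stats: () => fetchCalls };
}
```

The Supabase table plays the role of the Map here; the upsert in setCache is the write-back step.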

function cleanTitleForSearch(titleHint: string): string {
let cleanTitle = titleHint
.replace(/\.\w{2,4}$/, '')
.replace(/\[[^\]]*\]/g, '')
.replace(/\([^)]*\)/g, '')
.replace(/^(www\.)?[a-z0-9_-]+\.(org|com|net|io|tv|cc|to|bargains|club|xyz|me)\s*[-\u2013\u2014]\s*/i, '')
.replace(/[._]/g, ' ')
.replace(/(S\d{1,2}E\d{1,2}).*$/i, '')
.replace(/\b(1080p|720p|2160p|4k|480p|bluray|blu-ray|brrip|bdrip|dvdrip|webrip|web-?dl|webdl|hdtv|hdrip|x264|x265|hevc|avc|aac[0-9. ]*|ac3|dts|flac|mp3|remux|uhd|uhdr|hdr|hdr10|dv|dolby|vision|10bit|8bit|repack|proper|extended|unrated|dubbed|subbed|multi|dual|audio|subs|h264|h265)\b/gi, '')
.replace(/\b(HQ|HDRip|ESub|HDCAM|CAM|DVDScr|PDTV|TS|TC|SCR)\b/gi, '')
.replace(/\b(Malayalam|Tamil|Telugu|Hindi|Kannada|Bengali|Marathi|Punjabi|Gujarati|English|Spanish|French|German|Italian|Korean|Japanese|Chinese|Russian|Arabic|Turkish|Hungarian|Polish|Dutch|Portuguese|Ukrainian|Czech)\b/gi, '')
.replace(/\b\d+(\.\d+)?\s*(MB|GB|TB)\b/gi, '')
.replace(/\s*[-\u2013]\s*[A-Za-z0-9]{2,15}\s*$/, '')
.replace(/(19|20)\d{2}.*$/, '')
.replace(/\s+/g, ' ')
.trim();
if (cleanTitle.length < 2) cleanTitle = titleHint;
return cleanTitle;
}
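To see the cleaning rules in action, the function can be exercised standalone. The sketch below repeats cleanTitleForSearch verbatim from the diff so it runs on its own; the sample release names are invented for illustration:

```typescript
// Copied verbatim from the diff above so this example is self-contained.
function cleanTitleForSearch(titleHint: string): string {
  let cleanTitle = titleHint
    .replace(/\.\w{2,4}$/, '')
    .replace(/\[[^\]]*\]/g, '')
    .replace(/\([^)]*\)/g, '')
    .replace(/^(www\.)?[a-z0-9_-]+\.(org|com|net|io|tv|cc|to|bargains|club|xyz|me)\s*[-\u2013\u2014]\s*/i, '')
    .replace(/[._]/g, ' ')
    .replace(/(S\d{1,2}E\d{1,2}).*$/i, '')
    .replace(/\b(1080p|720p|2160p|4k|480p|bluray|blu-ray|brrip|bdrip|dvdrip|webrip|web-?dl|webdl|hdtv|hdrip|x264|x265|hevc|avc|aac[0-9. ]*|ac3|dts|flac|mp3|remux|uhd|uhdr|hdr|hdr10|dv|dolby|vision|10bit|8bit|repack|proper|extended|unrated|dubbed|subbed|multi|dual|audio|subs|h264|h265)\b/gi, '')
    .replace(/\b(HQ|HDRip|ESub|HDCAM|CAM|DVDScr|PDTV|TS|TC|SCR)\b/gi, '')
    .replace(/\b(Malayalam|Tamil|Telugu|Hindi|Kannada|Bengali|Marathi|Punjabi|Gujarati|English|Spanish|French|German|Italian|Korean|Japanese|Chinese|Russian|Arabic|Turkish|Hungarian|Polish|Dutch|Portuguese|Ukrainian|Czech)\b/gi, '')
    .replace(/\b\d+(\.\d+)?\s*(MB|GB|TB)\b/gi, '')
    .replace(/\s*[-\u2013]\s*[A-Za-z0-9]{2,15}\s*$/, '')
    .replace(/(19|20)\d{2}.*$/, '')
    .replace(/\s+/g, ' ')
    .trim();
  if (cleanTitle.length < 2) cleanTitle = titleHint;
  return cleanTitle;
}

// A TV episode: domain prefix, S01E02 marker, and quality tags all stripped.
const show = cleanTitleForSearch("www.Site.org - Some.Show.S01E02.1080p.WEB-DL.x265-GROUP.mkv");

// A movie: year, quality tags, and trailing release-group suffix stripped.
const movie = cleanTitleForSearch("Dark.Knight.2008.720p.BluRay.x264-SiNNERS");
```

Note the rule ordering matters: the SxxExx rule truncates everything after the episode marker, and the year rule truncates everything after a 19xx/20xx year, which is why both samples reduce to bare titles.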

export async function fetchTmdbData(imdbId: string, titleHint?: string): Promise<TmdbData> {
const tmdbKey = process.env.TMDB_API_KEY;
if (!tmdbKey) return EMPTY;
if (!imdbId && !titleHint) return EMPTY;

// Check cache first
const cacheKey = getCacheKey(imdbId, titleHint);
const cached = await getCached(cacheKey);
if (cached) return cached;

try {
let tmdbId: number | null = null;
@@ -32,50 +125,33 @@ export async function fetchTmdbData(imdbId: string, titleHint?: string): Promise
let backdropUrl: string | null = null;
let overview: string | null = null;

-    // Step 1: Find TMDB ID from IMDB ID (skip if no imdbId)
-    const findRes = await fetch(
-      `https://api.themoviedb.org/3/find/${imdbId}?api_key=${tmdbKey}&external_source=imdb_id`
-    );
-    if (findRes.ok) {
-      const findData = await findRes.json() as any;
-      const movieResult = findData.movie_results?.[0];
-      const tvResult = findData.tv_results?.[0];
-      const result = movieResult || tvResult;
-
-      if (result) {
-        tmdbId = result.id;
-        isTV = !movieResult && !!tvResult;
-        posterUrl = result.poster_path
-          ? `https://image.tmdb.org/t/p/w500${result.poster_path}` : null;
-        backdropUrl = result.backdrop_path
-          ? `https://image.tmdb.org/t/p/w1280${result.backdrop_path}` : null;
-        overview = result.overview || null;
-      }
-    }
+    // Step 1: Find TMDB ID from IMDB ID
+    if (imdbId) {
+      const findRes = await fetch(
+        `https://api.themoviedb.org/3/find/${imdbId}?api_key=${tmdbKey}&external_source=imdb_id`
+      );
+      if (findRes.ok) {
+        const findData = await findRes.json() as any;
+        const movieResult = findData.movie_results?.[0];
+        const tvResult = findData.tv_results?.[0];
+        const result = movieResult || tvResult;
+
+        if (result) {
+          tmdbId = result.id;
+          isTV = !movieResult && !!tvResult;
+          posterUrl = result.poster_path
+            ? `https://image.tmdb.org/t/p/w500${result.poster_path}` : null;
+          backdropUrl = result.backdrop_path
+            ? `https://image.tmdb.org/t/p/w1280${result.backdrop_path}` : null;
+          overview = result.overview || null;
+        }
+      }
+    }
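The branch above reduces to a small pure function over the /find payload: prefer the first movie result, fall back to the first TV result. A hedged sketch (pickFindResult is an invented name; the payload shape follows TMDB's movie_results/tv_results arrays):

```typescript
// Minimal slice of TMDB's /find response shape used by Step 1.
interface FindPayload {
  movie_results?: Array<{ id: number }>;
  tv_results?: Array<{ id: number }>;
}

// Prefer the movie result; fall back to the TV result; null if neither.
function pickFindResult(findData: FindPayload): { tmdbId: number; isTV: boolean } | null {
  const movieResult = findData.movie_results?.[0];
  const tvResult = findData.tv_results?.[0];
  const result = movieResult || tvResult;
  if (!result) return null;
  return { tmdbId: result.id, isTV: !movieResult && !!tvResult };
}
```

Factoring the selection out like this also makes the movie-vs-TV preference trivially unit-testable without hitting the network.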

-    // Step 1b: Fallback — search TMDB by title if /find returned nothing
+    // Step 1b: Fallback — search TMDB by title
if (!tmdbId && titleHint) {
-      // Clean the title: strip codecs, quality, brackets, file extensions, season/episode info
-      let cleanTitle = titleHint
-        .replace(/\.\w{2,4}$/, '')
-        .replace(/\[[^\]]*\]/g, '')
-        .replace(/\([^)]*\)/g, '')
-        .replace(/^(www\.)?[a-z0-9_-]+\.(org|com|net|io|tv|cc|to|bargains|club|xyz|me)\s*[-\u2013\u2014]\s*/i, '')
-        .replace(/[._]/g, ' ')
-        .replace(/(S\d{1,2}E\d{1,2}).*$/i, '')
-        .replace(/\b(1080p|720p|2160p|4k|480p|bluray|blu-ray|brrip|bdrip|dvdrip|webrip|web-?dl|webdl|hdtv|hdrip|x264|x265|hevc|avc|aac[0-9. ]*|ac3|dts|flac|mp3|remux|uhd|uhdr|hdr|hdr10|dv|dolby|vision|10bit|8bit|repack|proper|extended|unrated|dubbed|subbed|multi|dual|audio|subs|h264|h265)\b/gi, '')
-        .replace(/\b(HQ|HDRip|ESub|HDCAM|CAM|DVDScr|PDTV|TS|TC|SCR)\b/gi, '')
-        .replace(/\b(Malayalam|Tamil|Telugu|Hindi|Kannada|Bengali|Marathi|Punjabi|Gujarati|English|Spanish|French|German|Italian|Korean|Japanese|Chinese|Russian|Arabic|Turkish|Hungarian|Polish|Dutch|Portuguese|Ukrainian|Czech)\b/gi, '')
-        .replace(/\b\d+(\.\d+)?\s*(MB|GB|TB)\b/gi, '')
-        .replace(/\s*[-\u2013]\s*[A-Za-z0-9]{2,15}\s*$/, '')
-        .replace(/(19|20)\d{2}.*$/, '')
-        .replace(/\s+/g, ' ')
-        .trim();
-      if (cleanTitle.length < 2) cleanTitle = titleHint;
+      const cleanTitle = cleanTitleForSearch(titleHint);
const searchQuery = encodeURIComponent(cleanTitle);
// Try TV first, then movie
for (const mediaType of ['tv', 'movie'] as const) {
const searchRes = await fetch(
`https://api.themoviedb.org/3/search/${mediaType}?api_key=${tmdbKey}&query=${searchQuery}&page=1`
@@ -97,9 +173,13 @@ export async function fetchTmdbData(imdbId: string, titleHint?: string): Promise
}
}

-    if (!tmdbId) return EMPTY;
+    if (!tmdbId) {
+      // Cache the miss too (avoid repeated lookups for non-existent content)
+      await setCache(cacheKey, EMPTY);
+      return EMPTY;
+    }

-    // Step 2: Get credits + release info in one call
+    // Step 2: Get credits + release info
let tagline: string | null = null;
let cast: string | null = null;
let writers: string | null = null;
@@ -139,7 +219,12 @@ export async function fetchTmdbData(imdbId: string, titleHint?: string): Promise
}
}

-    return { posterUrl, backdropUrl, overview, tagline, cast, writers, contentRating, tmdbId };
+    const result: TmdbData = { posterUrl, backdropUrl, overview, tagline, cast, writers, contentRating, tmdbId };
+
+    // Cache the result
+    await setCache(cacheKey, result);
+
+    return result;
} catch {
return EMPTY;
}
65 changes: 55 additions & 10 deletions src/lib/streaming/streaming.ts
@@ -303,8 +303,9 @@ function getMemoryThresholds(): { warning: number; critical: number; severe: num
}

const { warning: MEMORY_WARNING_THRESHOLD, critical: MEMORY_CRITICAL_THRESHOLD, severe: MEMORY_SEVERE_THRESHOLD } = getMemoryThresholds();
-const MEMORY_CHECK_INTERVAL_MS = 30000; // Check every 30 seconds
+const MEMORY_CHECK_INTERVAL_MS = 15000; // Check every 15 seconds
const TORRENT_MIN_AGE_MS = 30 * 60 * 1000; // Don't cleanup torrents younger than 30 minutes
const TORRENT_MIN_AGE_CRITICAL_MS = 5 * 60 * 1000; // Shortened to 5 minutes under critical/severe pressure
const ORPHAN_TORRENT_MAX_AGE_MS = 2 * 60 * 60 * 1000; // Auto-cleanup torrents with 0 watchers after 2 hours

/**
@@ -344,7 +345,7 @@ export class StreamingService {
ensureDir(this.downloadPath);
this.options = options;

-    this.maxConcurrentStreams = options.maxConcurrentStreams ?? 10;
+    this.maxConcurrentStreams = options.maxConcurrentStreams ?? 4;
this.streamTimeout = options.streamTimeout ?? 120000;
Comment on lines +348 to 349

Copilot AI (Mar 6, 2026):
The constructor default for maxConcurrentStreams was changed to 4, but the StreamingServiceOptions JSDoc still documents a different default value. Please update the documentation (or align the default) so callers and tests don't rely on outdated defaults.
this.torrentCleanupDelay = options.torrentCleanupDelay ?? DEFAULT_CLEANUP_DELAY;
this.activeStreams = new Map();
@@ -724,6 +725,7 @@ export class StreamingService {
});
this.killOldestStreams(Math.max(3, Math.floor(this.activeStreams.size / 2)));
this.emergencyCleanup();
this.aggressiveCleanup('severe');
} else if (memUsage.rss >= MEMORY_CRITICAL_THRESHOLD) {
logger.error('CRITICAL memory pressure - triggering emergency cleanup', {
rssMB,
@@ -732,6 +734,7 @@
activeStreams: this.activeStreams.size,
});
this.emergencyCleanup();
this.aggressiveCleanup('critical');
} else if (memUsage.rss >= MEMORY_WARNING_THRESHOLD) {
logger.warn('High memory pressure - triggering aggressive cleanup', {
rssMB,
@@ -747,19 +750,60 @@ export class StreamingService {
* Kill the oldest N streams to free memory during severe pressure
*/
private killOldestStreams(count: number): void {
-    // NEVER kill active streams. Users are watching these — killing them mid-playback
-    // is a terrible UX. If memory is truly critical, systemd's MemoryMax will handle it.
-    // Emergency/aggressive cleanup already removes idle torrents without active watchers.
-    logger.warn('killOldestStreams called but SKIPPING — active streams are protected', {
-      requestedKill: count,
-      activeStreams: this.activeStreams.size,
-    });
+    // Only kill streams that have NO active watchers (nobody is watching).
+    // Active watchers = someone has an SSE connection open for this torrent.
+    // This protects users mid-playback while still freeing unwatched resources.
+    const unwatchedStreams: Array<[string, ActiveStream]> = [];
Comment on lines 752 to +756

Copilot AI (Mar 6, 2026):
killOldestStreams behavior changed materially (it now terminates streams and torrents under memory pressure). Since this file already has extensive unit tests, please add focused tests covering these memory-pressure cleanup paths (e.g., watched torrents aren't killed, unwatched victims are chosen deterministically, and destroying a torrent cleans up all streams for that infohash).

for (const [id, stream] of this.activeStreams) {
const watcherInfo = this.torrentWatchers.get(stream.infohash);
const watcherCount = watcherInfo?.watchers.size ?? 0;
if (watcherCount === 0) {
unwatchedStreams.push([id, stream]);
}
}

if (unwatchedStreams.length === 0) {
logger.warn('killOldestStreams: all streams have active watchers — skipping to protect playback', {
requestedKill: count,
activeStreams: this.activeStreams.size,
});
return;
}

// Sort by creation time (oldest first) and kill up to `count`
const toKill = unwatchedStreams.slice(0, count);
Copilot AI (Mar 6, 2026):
killOldestStreams says it will sort by creation time, but unwatchedStreams.slice(0, count) uses insertion order without sorting. Since ActiveStream has createdAt, sort unwatchedStreams by stream.createdAt (or by torrentAddedAt) before slicing so the "oldest" streams are actually targeted.

Suggested change:
-    const toKill = unwatchedStreams.slice(0, count);
+    const sortedUnwatched = unwatchedStreams.slice().sort((a, b) => {
+      const aCreated = a[1].createdAt;
+      const bCreated = b[1].createdAt;
+      if (aCreated == null || bCreated == null) {
+        // If creation time is missing, keep original relative order
+        return 0;
+      }
+      if (aCreated < bCreated) return -1;
+      if (aCreated > bCreated) return 1;
+      return 0;
+    });
+    const toKill = sortedUnwatched.slice(0, count);
for (const [id, stream] of toKill) {
logger.warn('Killing unwatched stream under memory pressure', {
streamId: id,
infohash: stream.infohash,
});
// Destroy the stream's torrent
const torrent = (this.client?.torrents ?? []).find(t => t.infoHash === stream.infohash);
if (torrent) {
(torrent.destroy as (opts: { destroyStore: boolean }, callback?: (err: Error | null) => void) => void)(
{ destroyStore: true },
() => {
this.deleteTorrentFolder(torrent.name, stream.infohash).catch(() => {});
}
);
}
this.activeStreams.delete(id);
Copilot AI (Mar 6, 2026):
Before deleting this.torrentWatchers for an infohash, clear any pending cleanupTimer on that entry. Otherwise the scheduled cleanup callback will still run later with stale state (and can log misleading messages or do redundant work).

Suggested change:
-      this.activeStreams.delete(id);
+      this.activeStreams.delete(id);
+      const watcherInfo = this.torrentWatchers.get(stream.infohash);
+      if (watcherInfo?.cleanupTimer) {
+        clearTimeout(watcherInfo.cleanupTimer);
+        // Avoid holding onto a stale timeout reference
+        (watcherInfo as { cleanupTimer?: NodeJS.Timeout | undefined }).cleanupTimer = undefined;
+      }
this.torrentWatchers.delete(stream.infohash);
this.torrentAddedAt.delete(stream.infohash);
}
Comment on lines +781 to +794

Copilot AI (Mar 6, 2026):
This logic destroys the underlying torrent for a single ActiveStream (torrent.destroy(...)), but multiple activeStreams can share the same infohash (e.g., different files from the same torrent). Destroying the torrent here will break those other streams, and the code doesn't remove/close the other activeStreams entries for that infohash. Consider selecting victims at the torrent level (by infohash) and, when destroying a torrent, also destroy/remove all streams that reference that infohash (including calling activeStream.stream.destroy()).

logger.warn('killOldestStreams completed', {
killed: toKill.length,
skippedWatched: unwatchedStreams.length - toKill.length + (this.activeStreams.size - unwatchedStreams.length),
remaining: this.activeStreams.size,
Comment on lines +796 to +799

Copilot AI (Mar 6, 2026):
skippedWatched is computed after mutating this.activeStreams (entries are deleted in the loop). Because unwatchedStreams.length was captured before the deletions but this.activeStreams.size is read after them, the logged value can be wrong or negative. Compute watchedCount, unwatchedCount, and killedCount before deleting, then log those stable counts.
});
}
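The selection policy killOldestStreams aims for, with the refinements from the review comments above folded in, can be captured as one pure function: filter to zero-watcher streams, sort oldest-first by createdAt, take at most count. A standalone sketch with invented types:

```typescript
// Minimal stand-in for the service's per-stream bookkeeping.
interface StreamInfo {
  id: string;
  infohash: string;
  createdAt: number; // epoch ms
}

// Pick up to `count` victim stream ids: only streams whose torrent has
// zero active watchers are candidates, ordered oldest-first by createdAt.
function pickVictims(
  streams: StreamInfo[],
  watchers: Map<string, number>, // infohash -> active watcher count
  count: number,
): string[] {
  return streams
    .filter((s) => (watchers.get(s.infohash) ?? 0) === 0)
    .sort((a, b) => a.createdAt - b.createdAt)
    .slice(0, count)
    .map((s) => s.id);
}
```

Keeping the policy pure like this makes the determinism the review asks for (watched streams spared, oldest unwatched chosen first) easy to assert in unit tests.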

/**
* Aggressive cleanup - remove torrents with no active watchers
*/
-  private aggressiveCleanup(): void {
+  private aggressiveCleanup(pressureLevel: 'warning' | 'critical' | 'severe' = 'warning'): void {
let cleaned = 0;
let skippedYoung = 0;
const now = Date.now();
@@ -772,7 +816,8 @@

// Skip young torrents — they may be actively downloading or transcoding
const addedAt = this.torrentAddedAt.get(infohash) ?? 0;
-    if (addedAt && (now - addedAt) < TORRENT_MIN_AGE_MS) {
+    const minAge = pressureLevel === 'warning' ? TORRENT_MIN_AGE_MS : TORRENT_MIN_AGE_CRITICAL_MS;
+    if (addedAt && (now - addedAt) < minAge) {
skippedYoung++;
continue;
}
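The age guard above can be sketched in isolation: under warning pressure the 30-minute minimum applies, and under critical or severe pressure it drops to 5 minutes so cleanup can actually reclaim memory. Constant names mirror the diff; the helper functions are invented for illustration:

```typescript
// Mirror the diff's constants.
const TORRENT_MIN_AGE_MS = 30 * 60 * 1000;          // 30 minutes
const TORRENT_MIN_AGE_CRITICAL_MS = 5 * 60 * 1000;  // 5 minutes

type PressureLevel = "warning" | "critical" | "severe";

// Warning pressure keeps the generous window; critical/severe shrink it.
function minAgeFor(pressure: PressureLevel): number {
  return pressure === "warning" ? TORRENT_MIN_AGE_MS : TORRENT_MIN_AGE_CRITICAL_MS;
}

// A torrent is "too young to clean up" when it has a known add time and
// is younger than the pressure-dependent minimum age.
function shouldSkipYoung(addedAt: number, now: number, pressure: PressureLevel): boolean {
  return addedAt > 0 && now - addedAt < minAgeFor(pressure);
}
```

So a 10-minute-old torrent is skipped during routine warning-level cleanup but becomes eligible once pressure escalates to critical or severe.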
33 changes: 33 additions & 0 deletions supabase/migrations/20260303030000_tmdb_data.sql
@@ -0,0 +1,33 @@
-- Cache table for TMDB API responses
-- Avoids repeated API calls for the same content
CREATE TABLE IF NOT EXISTS tmdb_data (
-- Lookup key: either an IMDB ID (tt1234567) or a cleaned title hash
lookup_key text PRIMARY KEY,
tmdb_id integer,
poster_url text,
backdrop_url text,
overview text,
tagline text,
cast_names text,
writers text,
content_rating text,
-- Track freshness
created_at timestamptz NOT NULL DEFAULT now(),
updated_at timestamptz NOT NULL DEFAULT now()
);

Copilot AI (Mar 6, 2026):
updated_at is indexed for stale-entry cleanup, but this migration doesn't add a trigger to keep updated_at current on updates and upserts. Other tables in this repo use an update_updated_at_column() trigger; consider adding a BEFORE UPDATE trigger for tmdb_data so upserts refresh updated_at and cleanup logic can rely on it.

Suggested change:
+-- Keep updated_at current on updates/upserts
+CREATE TRIGGER set_tmdb_data_updated_at
+  BEFORE UPDATE ON tmdb_data
+  FOR EACH ROW
+  EXECUTE FUNCTION update_updated_at_column();
-- Index for cleanup of stale entries
CREATE INDEX idx_tmdb_data_updated_at ON tmdb_data (updated_at);

-- Allow the app to read/write
ALTER TABLE tmdb_data ENABLE ROW LEVEL SECURITY;

-- Public read/write (no user-scoping needed, this is shared cache)
CREATE POLICY "tmdb_data_public_read" ON tmdb_data FOR SELECT USING (true);
Comment on lines +25 to +26

Copilot AI (Mar 6, 2026):
CREATE POLICY ... FOR SELECT USING (true) makes the entire shared TMDB cache readable to anyone who has table SELECT privileges (often anon/authenticated in Supabase setups). If that's not explicitly intended, restrict reads to service_role (or to authenticated users, or an RPC that only returns a single key) to avoid easy scraping and unbounded data exposure.

Suggested change:
--- Public read/write (no user-scoping needed, this is shared cache)
-CREATE POLICY "tmdb_data_public_read" ON tmdb_data FOR SELECT USING (true);
+-- Restrict direct table reads/writes to service_role; clients should use controlled RPCs
+CREATE POLICY "tmdb_data_service_read" ON tmdb_data
+  FOR SELECT
+  USING (auth.role() = 'service_role');
CREATE POLICY "tmdb_data_service_write" ON tmdb_data
FOR ALL
USING (auth.role() = 'service_role')
WITH CHECK (auth.role() = 'service_role');
Comment on lines +25 to +30

Copilot AI (Mar 6, 2026):
The comment says "Public read/write", but the policies only allow public read; writes are restricted to service_role. Please update the comment to match the actual policy intent (or adjust the policies if public write was intended).

COMMENT ON TABLE tmdb_data IS 'Cache for TMDB API responses to reduce API calls. Entries keyed by IMDB ID or title hash.';
COMMENT ON COLUMN tmdb_data.lookup_key IS 'IMDB ID (e.g. tt1234567) or sha256 of cleaned title for non-IMDB lookups';
Loading