58000 ERROR: could not open file
Tier 3 — Handle with care · ⚠️ MEDIUM confidence
🔴 Production Risk Error
If 58000 occurs while accessing relation files rather than COPY input files, it indicates loss of a heap or index file, which is a data loss event requiring immediate recovery from backup. Use pg_class to map the filenode in the error's file path back to a table name.
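As a sketch of that lookup, assume the failing path in the error message was base/16384/24576 (both numbers hypothetical). The trailing number is the filenode, which pg_filenode_relation() can map back to a relation; run this in the database whose OID appears in the path:

```sql
-- 24576 is the hypothetical filenode from the error's file path;
-- 0 means the default tablespace (pg_default).
SELECT pg_filenode_relation(0, 24576);
```

If the result is NULL, the filenode no longer maps to any live relation, which is consistent with an orphaned or already-dropped file.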
What this means
Postgres could not open a file needed for a database operation. This includes relation data files, WAL segments, configuration files, or external files referenced by COPY or pg_read_file(). The OS returned an error (ENOENT, EACCES, EMFILE, etc.).
Why it happens
1. A relation file (heap or index) has been deleted or moved outside of Postgres
2. COPY ... FROM 'filename' references a file that does not exist or is not readable by the postgres OS user
3. Too many open file descriptors (EMFILE): max_files_per_process or the OS ulimit is too low
4. Permissions on the data directory or a tablespace directory changed
5. A tablespace symlink is broken after a migration or storage reconfiguration
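For the last cause, a quick symlink check can be sketched in shell. The function name is ours, and the pg_tblspc path depends on where your data directory lives:

```shell
# Report any symlink in pg_tblspc whose target no longer exists.
# Usage: check_tblspc_links "$PGDATA/pg_tblspc"
check_tblspc_links() {
  local dir="$1" link
  for link in "$dir"/*; do
    # [ -e ] follows the symlink; failure means the target is missing
    [ -e "$link" ] || echo "broken tablespace link: $link"
  done
}
```

Any line this prints identifies a tablespace whose storage needs to be remounted or whose symlink needs to be recreated before the relations in it are reachable again.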
How to reproduce
A COPY command references a server-side file that does not exist.
COPY users FROM '/tmp/nonexistent_file.csv' CSV HEADER;
Fix 1: Verify the file path and permissions
When using COPY FROM/TO with a server-side file.
-- Check if the file exists and is readable (run as postgres OS user):
-- ls -la /tmp/nonexistent_file.csv
-- Use COPY with stdin for client-side files (avoids server file access):
-- psql -c "\COPY users FROM '/local/path/file.csv' CSV HEADER"
-- (\COPY streams from the client, no server file access needed)
Why this works
COPY FROM with a bare filename opens the file on the server as the postgres OS user. \COPY (the psql meta-command) streams the file from the client over the protocol connection, bypassing server-side file access entirely. \COPY is safer and more portable for client-side files.
Fix 2: Increase the open file descriptor limit
When EMFILE errors indicate file descriptor exhaustion.
-- Check current setting:
SHOW max_files_per_process;
-- Increase in postgresql.conf (the default is 1000):
-- max_files_per_process = 4096
-- Also raise the OS ulimit for the postgres user:
-- ulimit -n 65536 (in the service start script, or
-- LimitNOFILE=65536 in the systemd unit)
-- Note: max_files_per_process can only be set at server start,
-- so pg_reload_conf() is not enough; restart the server instead.
Why this works
Postgres tracks open file descriptors per backend in the virtual file descriptor (VFD) layer. max_files_per_process caps how many OS-level file descriptors a single backend holds open at once. When the cap is reached, Postgres transparently closes and reopens files as needed; but if the OS ulimit is lower than what the backends collectively need, open() itself fails with EMFILE.
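To confirm the limit the running postmaster actually inherited (it may differ from your login shell's ulimit), a Linux-only sketch reads /proc; it assumes a readable postmaster.pid, and the function name is ours:

```shell
# Print the soft "Max open files" limit of the postmaster process.
# Usage: postmaster_open_files "$PGDATA"
postmaster_open_files() {
  local pid
  pid=$(head -n 1 "$1/postmaster.pid")   # first line is the postmaster PID
  awk '/Max open files/ {print $4}' "/proc/$pid/limits"
}
```

If the printed value is lower than what you set with ulimit or LimitNOFILE, the new limit was not applied to the service and the EMFILE errors will persist.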
What not to do
Run Postgres as root to bypass file permission errors
Why it's wrong: Running a network service as root is a critical security risk, and the postmaster refuses to start as root for exactly this reason; fix the file ownership and permissions instead.
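The permission fix itself can be sketched as follows (function name ours; the chown line needs root and is left commented so you can adjust the user and group first):

```shell
# Restore the permissions Postgres expects on its data directory.
# Postgres refuses to start if the directory is world-accessible:
# 0700 is always accepted; 0750 works if initdb ran with -g.
fix_datadir_perms() {
  chmod 700 "$1"
  # chown -R postgres:postgres "$1"   # needs root; adjust user/group
}
```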
Sources
📚 Official docs: https://www.postgresql.org/docs/current/errcodes-appendix.html
📚 Feature docs: https://www.postgresql.org/docs/current/sql-copy.html
🔧 Source ref: src/backend/storage/file/fd.c — FileOpen()
📖 Further reading: COPY
Confidence assessment
⚠️ MEDIUM confidence
The COPY file scenario is well-documented. Relation file loss scenarios are well-understood but rare and recovery is highly environment-dependent. Confidence for COPY cases is HIGH; confidence for relation file loss is MEDIUM due to recovery complexity.