Database Recipe

Backup a PostgreSQL database

This recipe shows how to back up a PostgreSQL database using pg_dump. The dump is streamed to STDOUT, encrypted locally by the Backup Verified agent, and uploaded as an encrypted backup that you can later download, decrypt, and restore.

What this recipe is for

This recipe is a practical starting point for backing up a PostgreSQL database with a logical dump. It works well for many common environments where a portable, database-native export is the right fit.

The workflow is straightforward: pg_dump writes backup data to STDOUT, the Backup Verified agent reads that stream, encrypts it locally, and uploads the encrypted result to Managed Storage.

That gives you a clean backup workflow while preserving a practical path to download, decrypt, inspect, and restore the database later.

Good fit for

  • Application databases
  • Routine logical backups
  • Portable PostgreSQL recovery workflows
  • Scheduled daily or frequent backups

Not ideal for

  • Very large databases where a logical dump is too slow and a physical backup strategy (for example, pg_basebackup with WAL archiving) is a better fit
  • Cases that require cluster-wide planning beyond a single database dump (pg_dump does not capture roles or other global objects)

Before you begin

  • Install the Backup Verified agent.
  • Make sure pg_dump is installed and available on your system.
  • Create or obtain your bv-agent.yml config.
  • Use a PostgreSQL account with permission to read the database you want to back up.

Why this works

pg_dump writes its backup output to STDOUT by default, which makes it a natural fit for Backup Verified. You do not need to dump to a local file first unless you intentionally want that extra step.
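
If you want to see the streaming behavior for yourself, you can pipe the dump straight into another process without ever writing a local file. USER and DB_NAME are placeholders, as in the recipe below:

```shell
# Pipe the custom-format dump into wc to confirm pg_dump streams to
# STDOUT; no intermediate file is created at any point.
pg_dump -Fc -Z 6 -U USER -d DB_NAME | wc -c
```

The agent consumes the same stream in exactly this way, so anything that works in a plain shell pipeline will work as a backup_command.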

The agent reads the dump stream, encrypts it locally, and uploads the encrypted result. That keeps the workflow clean, scriptable, and aligned with a privacy-preserving backup model.

The recipe

Use a PostgreSQL dump command that creates a practical backup stream and keeps the output on STDOUT.

Step 1: Use this dump command

pg_dump \
  -Fc \
  -Z 6 \
  -U USER \
  -d DB_NAME

Replace USER and DB_NAME with values for your environment.

Step 2: Keep credentials out of the command line when possible

Avoid putting passwords directly in the command. Use PostgreSQL’s normal secure credential methods so the command stays clean and easier to automate.
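
One common approach is PostgreSQL's password file. As a minimal sketch, where HOST, DB_NAME, USER, and PASSWORD are placeholders for your environment:

```shell
# Append a credential line to ~/.pgpass in the format
# host:port:database:user:password, then lock down permissions;
# pg_dump ignores the file unless it is mode 0600.
printf 'HOST:5432:DB_NAME:USER:PASSWORD\n' >> "$HOME/.pgpass"
chmod 600 "$HOME/.pgpass"
```

With the file in place, pg_dump picks up the password automatically and the command line stays free of secrets.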

The point is not just to get one backup working. It is to create a workflow you can trust and repeat.

Suggested BV config

This example uses pg_dump as the backup command and gives the backup a clear identity inside Backup Verified.

# bv-agent.yml
bv:
  api_base: "https://backupverified.com"
  timeout_seconds: 30
  work_timeout_seconds: 0
  upload_timeout_seconds: 0

agent_key: "YOUR_AGENT_KEY"
client_encryption_key_b64: "YOUR_CLIENT_ENCRYPTION_KEY_B64"

backup:
  source_key: "postgres_app_db"
  name: "PostgreSQL App Database"
  description: "Logical dump of PostgreSQL database"
  delete_after_days: 0

source:
  type: "postgresql"
  backup_command: >
    pg_dump
    -Fc
    -Z 6
    -U USER
    -d DB_NAME

The > after backup_command: is YAML's folded block scalar syntax, which joins the indented lines below it into a single command string. It is not shell output redirection.

How to run it

bv-agent validate-config -config bv-agent.yml
bv-agent backup -config bv-agent.yml

Validate first, then run the backup. If the dump succeeds and the upload completes, the encrypted backup should appear in your portal.
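
For the scheduled daily backups mentioned earlier, a cron entry is one straightforward option. A hypothetical crontab line, where the binary, config, and log paths are assumptions for your host:

```shell
# Run the backup every night at 02:30 and append all output to a log.
30 2 * * * /usr/local/bin/bv-agent backup -config /etc/bv-agent.yml >> /var/log/bv-backup.log 2>&1
```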

What success looks like

  • The agent completes without error.
  • Your PostgreSQL backup appears in the Backup Verified portal.
  • You can later download the encrypted file and decrypt it locally.
  • The decrypted output is a valid PostgreSQL dump you can inspect or restore.

What could go wrong

PostgreSQL logical backups are reliable for many environments, but there are still a few common failure points worth checking.

Credential or connection problems

If the PostgreSQL user cannot connect or does not have access to the target database, the dump will fail before the backup can complete.

Wrong database name

A typo in DB_NAME can lead to failure or a backup of the wrong target.
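
A quick guard against this is to list the databases your backup user can actually see before wiring the name into the config (USER is a placeholder):

```shell
# List databases visible to this connection; confirm DB_NAME appears
# spelled exactly as it is in your backup_command.
psql -U USER -l
```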

Assuming a successful backup proves restore

Confidence grows when you actually download, decrypt, and validate a real dump instead of assuming everything is fine.

The goal is not just to create a backup artifact. It is to build confidence that your backup exists, can be retrieved later, and can be turned back into useful data.

How to download it later

  1. Sign in to your Backup Verified portal.
  2. Open the PostgreSQL backup entry you want.
  3. Use the download option to retrieve the encrypted backup file.
  4. Save it somewhere convenient for the decrypt step.

The downloaded file is still encrypted. That is expected.

Why that matters

Your dump is stored encrypted. Backup Verified does not need the plaintext database contents in order to store the backup.

Decryption happens locally with your own key material, which keeps control with you.

How to decrypt it locally

Once you have downloaded the encrypted backup file, use the agent to decrypt it locally.

bv-agent decrypt --in backup.bin.enc --out ./restore/ -config bv-agent.yml

Replace backup.bin.enc with your actual downloaded filename. The --out path is where the decrypted result will be written.

Then inspect the output

ls -lah ./restore/

Confirm that the decrypted output is present and looks like the PostgreSQL dump you intended to protect.
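
Because the recipe uses the custom format (-Fc), you can go a step further and read the dump's table of contents without touching any database. The filename here is an assumption; use whatever the decrypt step produced:

```shell
# pg_restore -l prints the archive's table of contents, which fails
# loudly if the file is not a valid PostgreSQL custom-format dump.
pg_restore -l ./restore/decrypted_dump.dump | head
```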

Good habit

Early on, do a real download-and-decrypt test instead of assuming everything is fine. That one step builds real confidence quickly.

What restore means for this recipe

For a PostgreSQL logical dump, restore usually means importing the decrypted backup with PostgreSQL restore tooling. The exact restore command depends on how you manage your environment and where you want the data restored.

In practice, many people first decrypt the backup, inspect it, and restore into a test or recovery database before using it in production.

pg_restore \
  -U USER \
  -d DB_NAME \
  decrypted_dump.dump

The exact filename and restore target will depend on your environment and workflow.
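
One hedged way to put that into practice is to restore into a throwaway database first. Here restore_test, USER, and the dump filename are placeholders:

```shell
# Create an empty scratch database, restore the dump into it, then
# spot-check that the expected tables came back.
createdb -U USER restore_test
pg_restore -U USER -d restore_test decrypted_dump.dump
psql -U USER -d restore_test -c '\dt'
```

If the spot check looks right, you have proven the full loop: backup, download, decrypt, restore.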