Setting Up Traefik Proxy with PostgreSQL and pgAdmin in Docker Compose

Learn how to configure Traefik Proxy with PostgreSQL and pgAdmin using Docker Compose. Set up entry points, manage dependencies, and optimize database connections efficiently.

Author

Faiz Ahmed Farooqui, Principal Technical Consultant

Date

Mar 17, 2025


In this post, I'll demonstrate how to include Traefik Proxy — a cloud native application proxy — in our Docker Compose file and use it in our architecture with PostgreSQL and pgAdmin service containers.

I've covered Traefik's principles in previous blog posts.

As a baseline, I'm assuming you're familiar with Docker, Docker Compose, and Traefik.

Let's get started —

I'll share the Docker Compose file first, followed by a full analysis of what each part of the template does.

Docker Compose
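The compose file itself is reproduced here as a minimal sketch consistent with the details discussed below. The container names, port mappings, and volume paths follow this article; the image tags, credentials, and Traefik v2 label syntax are illustrative assumptions you should adapt:

```yaml
version: "3.8"

services:
  traefik:
    image: traefik:v2.10          # assumed tag; pin your own
    container_name: traefik
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "1337:80"                 # article serves pgAdmin on localhost:1337
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  postgres:
    image: postgres:15            # official PostgreSQL image
    container_name: pg_container  # becomes the database hostname
    depends_on: [traefik]
    environment:
      POSTGRES_USER: admin        # placeholder credentials: change these
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: app_db
    ports:
      - "5432:5432"
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      # - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin -d app_db"]
      interval: 10s
      timeout: 5s
      retries: 5

  pgadmin:
    image: dpage/pgadmin4         # official pgAdmin image
    container_name: pgadmin_container
    depends_on:
      postgres:
        condition: service_healthy  # wait for the healthcheck, not just start
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com   # placeholder: change these
      PGADMIN_DEFAULT_PASSWORD: changeme
      SCRIPT_NAME: /_admin/pgadmin               # serve pgAdmin under this path
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.pgadmin.rule=Host(`localhost`) && PathPrefix(`/_admin/pgadmin`)"
      - "traefik.http.routers.pgadmin.entrypoints=web"
      - "traefik.http.services.pgadmin.loadbalancer.server.port=80"
```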

Details (of the content in the Docker Compose file)

  • Replace localhost with your own domain or sub-domain in the traefik, postgres & pgadmin services

  • Change the config values for these environment variables in the postgres service: POSTGRES_USER, POSTGRES_PASSWORD & POSTGRES_DB

  • Change the config values for these environment variables in the pgadmin service: PGADMIN_DEFAULT_EMAIL, PGADMIN_DEFAULT_PASSWORD & SCRIPT_NAME

  • Run the docker-compose.yml file with the command docker-compose up -d

  • Your pgAdmin server should now be up and running. Visiting http://localhost:1337/_admin/pgadmin will show you your pgAdmin portal

  • After you successfully log in to pgAdmin with the config values from your docker-compose.yml file, you'll need to register a server - just remember to use pg_container as the hostname

  • I'm using the official Docker images for PostgreSQL & pgAdmin

Analysing PostgreSQL service

  • The container name is your database hostname; in our template the database hostname is pg_container

  • depends_on: [traefik] means the postgres service waits until the traefik service is up

  • ports maps the PostgreSQL port running inside the container to the host, so postgres can accept incoming connection requests

  • The volumes mapping persists the database data on our local machine. The next time we run docker-compose from the same location, the persisted data in the ./postgres-data directory will be reused by the postgres service container, and it won't redo all the initialisation steps

  • In volumes, you see I have commented out this line — # - ./init.sql:/docker-entrypoint-initdb.d/init.sql

    • If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary)

    • After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files, run any executable *.sh scripts, and source any non-executable *.sh scripts found in that directory to do further initialization before starting the service

    • Warning: scripts in /docker-entrypoint-initdb.d are only run if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup

    • One common problem is that if one of your /docker-entrypoint-initdb.d scripts fails (which will cause the entrypoint script to exit) and your orchestrator restarts the container with the already initialized data directory, it will not continue on with your scripts
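As a purely illustrative example of such an initialization script, an init.sql like the one below would run once on first startup, when ./postgres-data is still empty (the table and seed row are hypothetical):

```sql
-- init.sql: executed by the entrypoint only against an empty data directory
CREATE TABLE IF NOT EXISTS users (
    id         SERIAL PRIMARY KEY,
    email      TEXT NOT NULL UNIQUE,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- seed an initial row; idempotent if the script is ever re-run manually
INSERT INTO users (email) VALUES ('admin@example.com')
ON CONFLICT (email) DO NOTHING;
```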

  • There are lots of other environment variables available in the postgres Docker image; you can find their references in the official postgres image documentation

  • I add a healthcheck to the postgres container because it can take almost a minute to initialise the first time and only starts accepting connection requests once that's done; without the check, backend apps that need a database connection in their start scripts might break

  • You can find healthcheck references in docker-compose's official documentation

Analysing pgAdmin service

  • The depends_on entry for postgres means the pgadmin service waits until a healthy status is reported by the postgres container, and this is how you should declare the postgres dependency in your backend app services as well

  • If not done as above, your backend will very likely fail on startup, which is not good for your final product
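The health-gated dependency for a backend service can be sketched like this (the backend service name and image are placeholders; the postgres service is assumed to define a healthcheck):

```yaml
services:
  backend:
    image: my-backend:latest      # hypothetical application image
    depends_on:
      postgres:
        condition: service_healthy  # start only after the healthcheck passes
```

With the plain short form (depends_on: [postgres]), Compose only waits for the container to start, not for PostgreSQL to accept connections; condition: service_healthy is what closes that gap.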

  • Almost all of the configs speak for themselves, except the env var SCRIPT_NAME

  • Since I have traefik, I wanted to host my pgAdmin app on a sub-route, and here's how it's done

    • First, move pgadmin onto a route of its own, which you can do by setting the SCRIPT_NAME variable

    • Once that's done, routing requests for /_admin/pgadmin to the container via traefik labels does the job
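The two steps above can be sketched as the following fragment of the pgadmin service definition (label and router names follow Traefik v2 conventions; the router name pgadmin is an assumption):

```yaml
    environment:
      SCRIPT_NAME: /_admin/pgadmin   # pgAdmin serves itself under this path prefix
    labels:
      - "traefik.enable=true"
      # route requests matching the prefix to this container
      - "traefik.http.routers.pgadmin.rule=Host(`localhost`) && PathPrefix(`/_admin/pgadmin`)"
      - "traefik.http.services.pgadmin.loadbalancer.server.port=80"
```

Because SCRIPT_NAME makes pgAdmin itself aware of the prefix, Traefik can pass the path through unchanged rather than stripping it.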

Summary

After reading this article, you should have a fundamental grasp of how to set up Traefik Proxy with PostgreSQL and pgAdmin in your Docker Compose.

And I strongly advise everyone to add docker-compose's healthcheck directive to your postgres service container to avoid unnecessary initialisation failures.

Source: This blog is authored by Faiz Ahmed Farooqui, Principal Technical Consultant at GeekyAnts. Originally published on Hashnode.
