
How to Install PostgreSQL with pgvector Step by Step

Updated: 2026-05-03

Setting up PostgreSQL with pgvector is the most sensible way to start a retrieval-augmented generation project without introducing a new database into the team’s stack. This guide walks through a reproducible install on Debian 12 or Ubuntu 22.04/24.04 with PostgreSQL 16 and pgvector 0.6, justifying each decision and relying on the tools the team already knows: pg_dump, pg_basebackup, pg_stat_statements, and the familiar replica machinery.

Why pgvector Instead of a Dedicated Vector Database

The temptation to reach for Pinecone, Weaviate, Milvus, or Qdrant is natural. All are competent, but each introduces a new system that must be deployed, backed up, monitored, and learned. For volumes below fifty million vectors with workloads mixing semantic search and relational filters, pgvector is usually the economically correct decision.

The honest trade-off: pgvector doesn’t compete on raw performance with engines written specifically for vectors, nor on advanced features like product quantisation or native hybrid filtering. If the application lives and dies by latency over hundreds of millions of documents, it deserves a fresh evaluation. Below that threshold, running a single system the team already knows pays dividends every time a backup has to be restored at three in the morning.

Step 1: Official PGDG Repository

PostgreSQL versions shipped by distributions lag behind the stable branch. The PGDG repository offers PostgreSQL 16 alongside an up-to-date postgresql-16-pgvector package:

bash
sudo apt install -y curl ca-certificates gnupg lsb-release

sudo install -d /etc/apt/keyrings
curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc \
  | sudo gpg --dearmor -o /etc/apt/keyrings/pgdg.gpg

echo "deb [signed-by=/etc/apt/keyrings/pgdg.gpg] https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" \
  | sudo tee /etc/apt/sources.list.d/pgdg.list

sudo apt update && sudo apt install -y postgresql-16 postgresql-client-16

The service starts automatically as postgresql@16-main.service.

Step 2: Install pgvector

The postgresql-16-pgvector package from PGDG ships the latest released version:

bash
sudo apt install -y postgresql-16-pgvector

Unlike many extensions, pgvector doesn’t require shared_preload_libraries. It’s per-database, not cluster-wide.

Step 3: User, Database, and Extension

Never use the postgres role for the application. Create a dedicated role:

sql
CREATE ROLE ragapp WITH LOGIN PASSWORD 'strong_password_here';
CREATE DATABASE ragdb OWNER ragapp;

\c ragdb
CREATE EXTENSION IF NOT EXISTS vector;
GRANT ALL ON SCHEMA public TO ragapp;

-- Verify installation
SELECT * FROM pg_extension WHERE extname = 'vector';
-- Should return extversion 0.6.0 or later

Step 4: Minimal Production Configuration

Adjust postgresql.conf for a machine with 16 GB dedicated to the database:

ini
shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 64MB
maintenance_work_mem = 512MB
wal_level = replica
max_wal_size = 4GB
max_connections = 100

maintenance_work_mem matters especially because CREATE INDEX on HNSW consumes it generously. With 512 MB the index builds in reasonable time; with 64 MB it can take hours.
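The figures above follow common rules of thumb (shared_buffers at roughly 25% of RAM, effective_cache_size at roughly 75%); a quick sketch for other machine sizes, assuming the host is dedicated to PostgreSQL:

```shell
# Rough sizing from total RAM in GB (rule of thumb, not a guarantee)
ram_gb=16
awk -v r="$ram_gb" 'BEGIN {
  printf "shared_buffers = %dGB\n", r / 4            # ~25% of RAM
  printf "effective_cache_size = %dGB\n", r * 3 / 4  # ~75% of RAM
}'
```

For the 16 GB machine in this guide, this reproduces the values used in postgresql.conf above.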

Step 5: IVFFlat or HNSW

IVFFlat: older, partitions the space into lists via k-means. The index should be created after loading representative data, because the list centroids are computed at build time. Low memory, reasonable for a few million vectors.

HNSW: hierarchical graph, better recall-latency trade-off, incremental inserts without quality loss. Higher build time and memory. Stable since 0.6.

The recommendation is to start with HNSW with defaults (m = 16, ef_construction = 64). In most cases quality beats IVFFlat without tuning. If the dataset exceeds tens of millions of rows or the maintenance window is tight, IVFFlat with lists set to the square root of row count is appropriate.
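As a quick sanity check for the IVFFlat route, the square-root rule is easy to compute (the four-million row count here is just a hypothetical example):

```shell
# lists ≈ sqrt(row_count); 4,000,000 rows is an assumed example figure
rows=4000000
awk -v n="$rows" 'BEGIN { printf "lists = %d\n", int(sqrt(n)) }'
```

The result would go into the index definition as WITH (lists = ...) on a CREATE INDEX ... USING ivfflat statement.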

Step 6: Typical Schema and Distance Operators

sql
CREATE TABLE docs (
    id          bigserial PRIMARY KEY,
    tenant_id   integer NOT NULL,
    content     text,
    metadata    jsonb,
    embedding   vector(1536)
);

CREATE INDEX ON docs USING gin (metadata);
CREATE INDEX ON docs USING hnsw (embedding vector_cosine_ops)
  WITH (m = 16, ef_construction = 64);

For searches:

sql
SET hnsw.ef_search = 100;

SELECT id, content, embedding <=> $1 AS distance
FROM docs
WHERE tenant_id = $2
ORDER BY embedding <=> $1
LIMIT 10;

Three operators available: <=> for cosine, <-> for Euclidean, <#> for negative inner product. With OpenAI models (normalised vectors), use cosine. Mixing operators with non-normalised vectors is the most common source of strange results.
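For unit-length vectors the three operators are tightly related: cosine distance is one minus the dot product, and <#> returns the negated dot product. A toy check with two 3-dimensional normalised vectors (values chosen purely for illustration):

```shell
# a = (1, 0, 0) and b = (0.6, 0.8, 0) are both unit length
awk 'BEGIN {
  dot = 1*0.6 + 0*0.8 + 0*0                       # inner product
  printf "cosine distance (<=>): %.2f\n", 1 - dot
  printf "negative inner product (<#>): %.2f\n", -dot
}'
```

With non-normalised vectors this equivalence breaks down, which is exactly why mixing operators produces the strange results mentioned above.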

Step 7: Access, Backup, and Observability

Remote access: by default the server listens on localhost. If the application lives on another host, open listen_addresses to the strictly necessary private interface and enforce scram-sha-256 in pg_hba.conf. Never expose PostgreSQL directly to the internet.

Backup: pg_dump -Fc -Z9 is enough on day one. When the database grows, adopt pgbackrest or restic with incremental rotations and rehearse restores regularly. A backup without a tested restore is a hope, not a guarantee.
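A minimal nightly-dump sketch along those lines (the directory and filename scheme are assumptions; adapt to your layout):

```shell
# Build a date-stamped pg_dump command; echoed here rather than executed
backup_dir=/var/backups/postgres
stamp=$(date +%F)
echo "pg_dump -Fc -Z9 -d ragdb -f ${backup_dir}/ragdb_${stamp}.dump"
```

Wired into a cron job or systemd timer, this covers day one; the part that actually matters is rehearsing the pg_restore regularly.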

Minimum observability: enable pg_stat_statements, watch buffer-cache hit ratio (must stay above 99%), and export metrics with postgres_exporter to Prometheus. HNSW indexes degrade if autovacuum doesn’t keep pace with deletes and updates.
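Note that pg_stat_statements, unlike pgvector, does require shared_preload_libraries and therefore a server restart, followed by CREATE EXTENSION pg_stat_statements; in the target database:

```ini
# postgresql.conf — takes effect only after a restart
shared_preload_libraries = 'pg_stat_statements'
```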

Final Verification

sql
INSERT INTO docs (tenant_id, content, embedding)
VALUES (1, 'Test document',
        (SELECT array_agg(random())::vector(1536) FROM generate_series(1,1536)));

SELECT id, content FROM docs
ORDER BY embedding <=> (SELECT embedding FROM docs LIMIT 1)
LIMIT 5;

If the query returns results without error, the installation is working. With only a handful of rows the planner may prefer a sequential scan; on realistic data volumes, EXPLAIN ANALYZE should show the HNSW index being used.

Conclusion

Installing PostgreSQL with pgvector is not complicated, but each decision matters more than it looks. Picking the PGDG repository, creating a dedicated role, running the extension in the right database, building the HNSW index at the right moment, and sizing maintenance_work_mem properly mark the difference between a deployment that ages gracefully and one that demands manual intervention every few weeks.

The argument for pgvector over dedicated alternatives is operational more than technical. Few organisations genuinely benefit from introducing a vector-specific database when they already run PostgreSQL comfortably. Most discover that apparent performance savings are spent, and then some, on duplicated backup, replication, and monitoring complexity. When the time to migrate to a specialised engine arrives, the starting point will be a production database — which usually makes postponing the migration the right call.

Frequently asked questions

Which PostgreSQL version is compatible with pgvector?

pgvector is compatible with PostgreSQL 12 or higher. To use HNSW indexes (available since pgvector 0.5.0), PostgreSQL 14+ is recommended for better performance with heavy vector workloads.

How many dimensions does pgvector support?

pgvector’s vector type supports up to 16,000 dimensions, although HNSW and IVFFlat indexes can only index vectors of up to 2,000 dimensions. The most common embedding models use 384, 768, or 1,536 dimensions, well below both limits.

When should I use pgvector instead of a dedicated vector database?

If you already use PostgreSQL and your collection stays in the low tens of millions of vectors, pgvector is usually sufficient. For larger collections, or when low-latency ANN search at scale is the core requirement, Qdrant or Weaviate may be more appropriate.


Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.