Large Scale Embedding

How to keep the system usable while mass embedding of data runs in parallel.

The available memory should be large enough to hold the data needed for building the indexes.

Docker compose override example:

services:
  pgvector:
    # Larger /dev/shm so PostgreSQL's parallel workers are not starved of shared memory
    shm_size: 58G
    deploy:
      resources:
        limits:
          memory: 75G      # hard cap for the container
        reservations:
          memory: 2G       # memory reserved for the container
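
If this override is kept in a separate file (for example docker-compose.override.yml next to the main compose file; that file name is only an assumption about the deployment layout), Docker Compose merges it automatically on docker compose up, or both files can be passed explicitly with -f. The container has to be recreated for the new limits to take effect.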


These settings let the database process take advantage of the memory available; without them it will not use all of the memory provided to the container.
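
On top of the container limits, the memory PostgreSQL itself uses while building indexes is governed by its own settings, most notably maintenance_work_mem. As a minimal sketch, assuming the service runs a standard postgres/pgvector image that accepts -c flags on the command line (the values below are illustrative, not recommendations), the same override could also raise them:

services:
  pgvector:
    # Illustrative only: memory and parallelism used while building indexes
    command:
      - postgres
      - -c
      - maintenance_work_mem=8GB
      - -c
      - max_parallel_maintenance_workers=4

How much these help depends on the indexes being built; pgvector's HNSW builds in particular benefit from a larger maintenance_work_mem.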
