Large Scale Embedding
How to keep the system usable while mass embedding of data runs in parallel
The available memory should be large enough to hold the data while indexes are being built.
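As a rough sizing aid, the memory needed to build an index scales with the number of vectors times their dimensionality. The sketch below is a back-of-envelope estimate only; the overhead factor for index structures is an assumption, not a pgvector-documented figure, and the example vector count and dimension are illustrative.

```python
def index_build_memory_gb(num_vectors: int, dims: int,
                          bytes_per_float: int = 4,
                          overhead: float = 1.5) -> float:
    """Rough estimate of RAM (GiB) needed to build a vector index.

    overhead is an assumed multiplier covering index structures
    (e.g. neighbor graphs) on top of raw vector storage.
    """
    raw_bytes = num_vectors * dims * bytes_per_float
    return raw_bytes * overhead / (1024 ** 3)

# e.g. 10 million 1536-dimensional float4 vectors
print(round(index_build_memory_gb(10_000_000, 1536), 1))
```

An estimate like this can guide the `shm_size` and memory limits chosen in the Docker Compose override below it.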
Docker Compose override example:

pgvector:
  shm_size: 58G
  deploy:
    resources:
      limits:
        memory: 75G
      reservations:
        memory: 2G
These settings let the process take full advantage of the memory available to the host; without them, it will not use all of the memory provided.
Related content

Embedding Estimates Nov 2024
On-prem System Requirements
PgVector Scaling
Pricing/Speed
AWS Private Cloud Instance System Requirements
Deployment system sizing