Deployment Steps

Required Components

 

Items highlighted in yellow only need to be installed by the customer for on-prem/Private Cloud deployments.

| # | Component | AI Firewall | Private AI |
|---|-----------|-------------|------------|
| 1 | Browser Extension or Proxy | Yes | No |
| 2 | Dashboard / Ingestor | Yes | Yes |
| 3 | Gateway | Yes, no GPU required | Yes, GPU required |
| 4 | Local LLM¹ | Yes | Yes |
| 5 | Document uploading agent | No | Optional |

¹ The Local LLM can be substituted with OpenAI or AWS Bedrock API usage.

System Requirements

Private Cloud - AWS Private Cloud Instance System Requirements.

On-Prem - On-prem System Requirements.

AGAT can install both of the above.

AGAT manages these components for SaaS deployments.

 

Components - Technical Summary

Browser Extension

The BusinessGPT browser extension needs to be deployed to your end users' browsers. See How to deploy the BusinessGPT browser extension for deployment instructions.
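The linked guide is the authoritative procedure. Purely as a sketch of one common approach (managed Chrome force-installed through enterprise policy; the extension ID shown is a placeholder, not the real BusinessGPT ID), deployment to Linux desktops could look like this:

```sh
# Minimal sketch: force-install a Chrome extension on managed Linux desktops.
# EXTENSION_ID_PLACEHOLDER must be replaced with the BusinessGPT extension ID
# from the Chrome Web Store. Windows/macOS use the same ExtensionInstallForcelist
# policy, delivered via GPO or MDM instead of a local JSON file.
sudo mkdir -p /etc/opt/chrome/policies/managed
sudo tee /etc/opt/chrome/policies/managed/businessgpt-extension.json > /dev/null <<'EOF'
{
  "ExtensionInstallForcelist": [
    "EXTENSION_ID_PLACEHOLDER;https://clients2.google.com/service/update2/crx"
  ]
}
EOF
```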

Proxy

We recommend consulting with AGAT before planning a proxy solution.

For a POC, it is easier to start with the browser extension.

See how to deploy the BusinessGPT AI Firewall Proxy.

Dashboard / Ingestor

The user and management interface is usually co-located with the Ingestor services on a Windows server.

AGAT will provide an installer that installs the services and the IIS Dashboard website.

Gateway

The Gateway is a set of docker containers deployed on Linux.

These containers perform functions including processing documents and other content for embedding and storage, processing chatbot queries, AI Firewall processing, and both vector and conventional database storage.

A set of configuration files is provided, including a Docker Compose file that downloads all of the relevant containers and configures them with default settings.

This server should have an NVIDIA GPU with the relevant CUDA drivers installed, for creating the embeddings of the company data. A GPU is not required if only the AI Firewall capabilities are used, as no company data needs to be embedded for storage.
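AGAT supplies the actual Docker Compose file; the fragment below is only a sketch of the general shape of a GPU-enabled Compose service, with placeholder service and image names, so the host can be prepared accordingly (the GPU reservation requires the NVIDIA Container Toolkit on the host).

```yaml
# Illustrative sketch only - the real docker-compose.yml is provided by AGAT.
# Image and service names are placeholders, not the actual BusinessGPT containers.
services:
  gateway-embedding:
    image: registry.example.com/businessgpt/embedding:latest   # placeholder image
    environment:
      - DB_CONNECTION_STRING=${DB_CONNECTION_STRING}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia          # needs the NVIDIA Container Toolkit on the host
              count: 1
              capabilities: [gpu]
  vector-db:
    image: registry.example.com/businessgpt/vector-db:latest   # placeholder image
    volumes:
      - vector-data:/data
volumes:
  vector-data:
```

For AI Firewall-only deployments the GPU reservation is not needed, in line with the note above.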

Local LLM

This is deployed as a docker container on Linux.

This server should have an NVIDIA GPU with the relevant CUDA drivers installed.

A Docker Compose configuration file is provided that downloads the relevant AI model and exposes it to the Gateway through an API.
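As with the Gateway, the real compose file and model image come from AGAT; a minimal sketch of a GPU-backed LLM service, again with placeholder names and port values, might look like this:

```yaml
# Illustrative sketch only - the real compose file and model image come from AGAT.
services:
  local-llm:
    image: registry.example.com/businessgpt/local-llm:latest   # placeholder image
    ports:
      - "8000:8000"            # example API port the Gateway would call
    volumes:
      - llm-models:/models     # model weights are downloaded on first start
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
volumes:
  llm-models:
```

Running nvidia-smi on the host before starting the container is a quick check that the GPU and CUDA drivers are in place.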

Document uploading agent

If there are local copies of documents that a customer wishes to bulk upload to Private AI, they may optionally use the Uploading Agent service, which is installed in a location with access to those documents.

Configuration

Each of the above components needs to be configured to work with the others.

Guides are provided to assist in this process.

Deployment Process - Optionally managed by AGAT

  1. Open network access for BGPT servers to any locally hosted resources (e.g. Confluence).

  2. Create Dashboard and Gateway instances

  3. Create Databases

  4. Dashboard

    1. Install Dashboard services

  5. Gateway

    1. Deploy the docker containers using the “Docker Compose” script.

    2. Configure DB connection string

    3. Configure LLM VPC IP (an example configuration sketch for steps 5.2 and 5.3 follows after this list)

  6. Deploy AGAT LLM Amazon Image

  7. Configure Load Balancers to provide external access to Dashboard

  8. Configure the Dashboard to access company data stores (Google Drive, Confluence, etc.)
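The exact configuration keys are defined in the files AGAT provides; purely as an illustration of steps 5.2 and 5.3, the Gateway environment carries values along these lines (names and values below are placeholders):

```sh
# Placeholder names and values - the real keys are defined in the
# AGAT-provided Docker Compose / configuration files.
DB_CONNECTION_STRING="Server=db.internal.example;Database=businessgpt;User Id=bgpt;Password=CHANGE_ME"
LLM_API_URL="http://10.0.2.30:8000"   # private VPC IP of the Local LLM server
```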

Deployment Time Guidelines

Below are the timelines a customer/partner should expect when deploying BusinessGPT.

Local = On-premises or private customer cloud (VPC)

 

| Task | AI Firewall SaaS | AI Firewall Local | Private AI SaaS | Private AI Local | Resource needed |
|---|---|---|---|---|---|
| Initial environment setup | 0 | 1 day | 0 | 1 day | System |
| Extension deployment | 3 hours | 3 hours | N/A | N/A | |
| Configure authentication | 2 hours | 2 hours | 2 hours | 2 hours | Azure / AD |
| Connect sources | N/A | N/A | 2-3 hours per source | 2-3 hours per source | Source admin (SharePoint/Confluence, etc.) |
| Analyse governance needs and configure classification rules and policies | 1 day | 1 day | 1 day | 1 day | Risk/compliance/CISO teams |
| Building shared chats | N/A | N/A | 1 day | 1 day | |
| Training | 1 day | 1 day | 1 day | 1 day | |
| Testing | 1 day | 1 day | 1 day | 1 day | |