FAQ
Functional FAQ
How are permissions managed?
When a user manually uploads a document, transcript, or website, only that user has permission to view the content and ask questions about it.
When an email is forwarded, all users who received the original email will have access to it.
When integrating systems like SharePoint, Teams chat, and Google Drive, BusinessGPT syncs the permissions from the source system, so only users who have access to the content in the source system can ask questions about it or get answers from it.
If permissions change in the source system, BusinessGPT syncs the updated permissions to the dashboard.
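A minimal sketch of how synced permissions could be represented alongside ingested content is shown below; the class and field names are illustrative assumptions, not BusinessGPT's actual internal schema.

```python
# Hypothetical sketch: mirroring source-system permissions alongside
# ingested content. Field names are illustrative, not the actual schema.
from dataclasses import dataclass, field

@dataclass
class IngestedItem:
    item_id: str
    source: str                      # e.g. "sharepoint", "teams", "gdrive", "manual_upload"
    allowed_users: set[str] = field(default_factory=set)

def sync_permissions(item: IngestedItem, source_acl: set[str]) -> None:
    """Mirror the source system's ACL so the dashboard stays in sync.

    If users are added or removed in SharePoint, Teams, or Google Drive,
    the copy used for question answering reflects the change.
    """
    item.allowed_users = set(source_acl)

# Example: a manually uploaded transcript is visible only to the uploader.
upload = IngestedItem(item_id="doc-123", source="manual_upload",
                      allowed_users={"alice@example.com"})

# Example: a SharePoint document inherits its site's ACL.
shared = IngestedItem(item_id="doc-456", source="sharepoint")
sync_permissions(shared, {"alice@example.com", "bob@example.com"})
```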
At which point are permissions applied?
Each user can only see items they have access to and, therefore, can only ask questions about content they have access to.
When a user asks a question across all company data, the system finds the most relevant documents the user can access, and only those documents are sent to the AI to generate the answer.
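The sketch below illustrates the idea of permission-aware retrieval: candidate documents are filtered by the asking user's access before anything reaches the model. The ranking and retrieval steps are trivial stand-ins, not the actual BusinessGPT implementation.

```python
# Illustrative sketch of permission-aware retrieval: only documents the
# asking user can access are eligible to be sent to the model.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    allowed_users: set

def relevance(question: str, doc: Doc) -> int:
    # Stand-in for vector similarity: count shared words.
    return len(set(question.lower().split()) & set(doc.text.lower().split()))

def retrieve_context(question: str, user: str, docs: list, top_k: int = 5) -> list:
    # 1. Keep only documents the asking user is allowed to see.
    permitted = [d for d in docs if user in d.allowed_users]
    # 2. Rank the permitted documents by relevance to the question.
    ranked = sorted(permitted, key=lambda d: relevance(question, d), reverse=True)
    # 3. Only the top permitted documents become the LLM context.
    return [d.text for d in ranked[:top_k]]

docs = [
    Doc("Q3 revenue report", {"alice"}),
    Doc("Q3 revenue forecast draft", {"alice", "bob"}),
]
print(retrieve_context("What was the Q3 revenue", "bob", docs))  # only the shared doc
```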
How does the AI Firewall detect bias, and is there a specific mechanism to support ethical usage?
We do not detect bias directly. Instead, we give the company tools to manage the risks of hallucinations and bias by enforcing policies that measure the risk of different use cases. This allows the company to decide for which use cases it is willing to accept potential bias or hallucination issues.
In addition, when asking questions about your company data, bias is reduced or removed because the answers are based on your data rather than on pretrained data.
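As a rough illustration, such a policy could map each use case to the action taken when bias or hallucination risk is flagged; the use-case names, actions, and defaults below are hypothetical examples, not built-in categories.

```python
# Hypothetical illustration of per-use-case risk policies: the company
# decides which use cases may tolerate bias/hallucination risk and which
# must be restricted or blocked. Names and categories are illustrative.
POLICIES = {
    # use case            action when bias/hallucination risk is flagged
    "marketing_copy":     "allow",               # creative text: risk accepted
    "internal_search":    "allow_with_warning",
    "legal_advice":       "block",               # high stakes: not permitted
}

def enforce(use_case: str) -> str:
    # Default to the most restrictive action for unknown use cases.
    return POLICIES.get(use_case, "block")

print(enforce("marketing_copy"))   # allow
print(enforce("legal_advice"))     # block
```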
Technical FAQ
Where can BusinessGPT be hosted?
You can deploy it in your own AWS tenant or on-prem.
Where is our data stored?
In a database on your own servers. Your data is never sent to external servers.
How do you provide answers to questions without sending our data to the cloud?
We run a local LLM that is fully standalone and does not require internet connectivity. We currently use Meta's Llama 3 model, which is open source and free for products with fewer than 700 million users.
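BusinessGPT's actual serving stack is not described here, but the sketch below shows one common way to run a Llama 3 model fully offline, using the open-source llama-cpp-python package; the model path is a placeholder.

```python
# Minimal sketch of running a Llama 3 model fully offline.
# llama-cpp-python is used here only as an example of local inference;
# it is not necessarily the serving stack used by the product.
from llama_cpp import Llama

llm = Llama(
    model_path="/models/llama-3-8b-instruct.Q4_K_M.gguf",  # local file, no internet needed
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the local GPU
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize our leave policy."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```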
What hardware does the product require?
It requires one Windows server, two Linux servers, and at least one GPU with a minimum of 24 GB of VRAM. More details here.
Do you train LLM models with customer data?
No. Customer data is stored in segregated vector databases, and we use the RAG (Retrieval-Augmented Generation) methodology to generate answers.
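The sketch below outlines the RAG pattern with per-customer segregation: retrieval only reads from that customer's own store, and the retrieved passages are passed to the model as prompt context rather than used for training. Class and function names are illustrative, not the actual implementation.

```python
# Minimal sketch of RAG with per-customer segregation: each customer's
# documents live in their own store, retrieval reads only that store, and
# the model weights are never updated with customer data.
class CustomerVectorStore:
    def __init__(self):
        self._docs = []          # (embedding, text) pairs for one customer only

    def add(self, embedding, text):
        self._docs.append((embedding, text))

    def search(self, query_embedding, top_k=3):
        # Stand-in similarity: dot product over toy embeddings.
        scored = sorted(
            self._docs,
            key=lambda d: sum(a * b for a, b in zip(query_embedding, d[0])),
            reverse=True,
        )
        return [text for _, text in scored[:top_k]]

stores = {"customer_a": CustomerVectorStore(), "customer_b": CustomerVectorStore()}

def build_prompt(customer_id, query_embedding, question):
    # Retrieval is scoped to this customer's store; the retrieved passages
    # become prompt context for the local LLM, never training data.
    context = stores[customer_id].search(query_embedding)
    return f"Answer using only this context:\n{context}\nQuestion: {question}"
```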
Which components use the SQL server?
The Dashboard, the Ingestor, and the Gateway.
Is the Ingestor service developed by AGAT?
Yes.