RAG Module

Overview

The RAG (Retrieval-Augmented Generation) module enhances the capabilities of Large Language Models (LLMs) by allowing them to reference external knowledge bases. This approach ensures:

  • Improved precision and relevance in responses.

  • Access to domain-specific knowledge without requiring model retraining.

  • Practical and authoritative outputs.

Features

The RAG module in Strata AI provides the following functionalities:

  1. Data Input:

    • Supports multiple file formats (e.g., PDF, DOCX, MD, CSV, TXT, PPT).

    • Handles Python objects directly.

  2. Retrieval:

    • Supports Faiss, BM25, ChromaDB, ElasticSearch, and mixed retrieval methods.

  3. Post-Retrieval:

    • Includes advanced re-ranking methods like LLM Rerank, ColbertRerank, CohereRerank, and ObjectRerank for accurate data prioritization.

  4. Data Updates:

    • Allows addition and modification of text and Python objects.

  5. Data Storage and Recovery:

    • Saves vectorized data to avoid re-vectorization during future queries.


Preparation

Installation

Install the RAG module using the following commands:

# From PyPI
pip install strataai[rag]

# From source
pip install -e .[rag]

Note: Some components, such as ColbertRerank, require packages that must be installed separately. For ColbertRerank, install llama-index-postprocessor-colbert-rerank.

Embedding Configuration

Set up embeddings in your configuration file:

# Example for OpenAI
embedding:
  api_type: "openai"
  base_url: "YOUR_BASE_URL"
  api_key: "YOUR_API_KEY"
  dimensions: "MODEL_DIMENSIONS"

You can also configure embeddings for Azure, Gemini, or Ollama. For additional providers such as HuggingFace or Bedrock, pass a model via the embed_model parameter of the from_docs or from_objs functions.
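
As an illustration, the sketch below passes a HuggingFace embedding model directly. It assumes embed_model accepts a llama-index-compatible embedding object (per the note above); the specific model name is only an example.

# Illustrative sketch: supplying a custom embedding model via embed_model.
# Assumes from_docs accepts llama-index embedding objects; the model name
# below is an arbitrary example, not a requirement.
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from strataai.rag.engines import SimpleEngine

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
engine = SimpleEngine.from_docs(
    input_files=["path/to/file.txt"],
    embed_model=embed_model,
)

Using a HuggingFace embedding this way also requires the llama-index-embeddings-huggingface package to be installed.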

Optional: Omniparse Configuration

To optimize PDF parsing, configure Omniparse:

omniparse:
  api_key: 'YOUR_API_KEY'
  base_url: 'YOUR_BASE_URL'

Omniparse is optional. If configured, it is used only for parsing PDF files.


Key Functionalities

1. Data Input

Example 1.1: Files or Directories

import asyncio
from strataai.rag.engines import SimpleEngine

async def main():
    engine = SimpleEngine.from_docs(input_files=["path/to/file.txt"])
    answer = await engine.aquery("What does Bob like?")
    print(answer)

if __name__ == "__main__":
    asyncio.run(main())

Example 1.2: Python Objects

import asyncio

from pydantic import BaseModel
from strataai.rag.engines import SimpleEngine

class Player(BaseModel):
    name: str
    goal: str

async def main():
    objs = [Player(name="Jeff", goal="Top One")]
    engine = SimpleEngine.from_objs(objs=objs)
    answer = await engine.aquery("What is Jeff's goal?")
    print(answer)

if __name__ == "__main__":
    asyncio.run(main())

2. Retrieval

Example 2.1: Faiss Retrieval

import asyncio

from strataai.rag.engines import SimpleEngine
from strataai.rag.schema import FAISSRetrieverConfig

async def main():
    engine = SimpleEngine.from_docs(
        input_files=["path/to/file.txt"],
        retriever_configs=[FAISSRetrieverConfig()]
    )
    answer = await engine.aquery("What does Bob like?")
    print(answer)

if __name__ == "__main__":
    asyncio.run(main())

Example 2.2: Hybrid Retrieval

import asyncio

from strataai.rag.engines import SimpleEngine
from strataai.rag.schema import BM25RetrieverConfig, FAISSRetrieverConfig

async def main():
    engine = SimpleEngine.from_docs(
        input_files=["path/to/file.txt"],
        retriever_configs=[FAISSRetrieverConfig(), BM25RetrieverConfig()]
    )
    answer = await engine.aquery("What does Bob like?")
    print(answer)

if __name__ == "__main__":
    asyncio.run(main())

3. Post-Retrieval

Example 3.1: LLM Re-Ranking

import asyncio

from strataai.rag.engines import SimpleEngine
from strataai.rag.schema import FAISSRetrieverConfig, LLMRankerConfig

async def main():
    engine = SimpleEngine.from_docs(
        input_files=["path/to/file.txt"],
        retriever_configs=[FAISSRetrieverConfig()],
        ranker_configs=[LLMRankerConfig()]
    )
    answer = await engine.aquery("What does Bob like?")
    print(answer)

if __name__ == "__main__":
    asyncio.run(main())

4. Data Updates

Example 4.1: Add Text and Objects

# Reuse an existing SimpleEngine instance (see Examples 1.1 and 1.2)
engine.add_docs(["path/to/new_file.txt"])
engine.add_objs([Player(name="Mike", goal="Top Three")])
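
After updating, the engine can be queried as in the earlier examples; a brief usage sketch (inside an async context) follows.

# Query the updated engine, e.g. about the newly added object
answer = await engine.aquery("What is Mike's goal?")
print(answer)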

5. Data Storage and Recovery

Example 5.1: Persist and Reload

# Run inside an async context (e.g. the main() coroutine from earlier examples)
persist_dir = "./tmp_storage"

# Build the index and persist the vectorized data to disk
engine = SimpleEngine.from_docs(input_files=["path/to/file.txt"])
engine.persist(persist_dir)

# Later: reload the persisted index without re-vectorizing the documents
engine = SimpleEngine.from_index(persist_path=persist_dir)
answer = await engine.aquery("What does Bob like?")
print(answer)
