Architected and integrated a self-hosted enterprise RAG system using LangChain (early adoption), vector databases, and agent frameworks to provide intelligent access to the company knowledge base.
The support team received many requests requiring developer-level knowledge, with information scattered across multiple platforms and no unified way to access company documentation. The embedding models available at the time were also error-prone, making reliable semantic search a challenge.
Designed and integrated a RAG system built on LangChain (adopted immediately after its release) and Azure OpenAI Service. Ingested content from multiple company platforms into a Weaviate vector database. Implemented multi-modal (text and image) embeddings for comprehensive semantic search. Architected a self-hosted solution to meet data privacy and compliance requirements. Evaluated and migrated between embedding models, starting with gte-large and moving to qwen3-embedding for improved retrieval quality.
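A minimal sketch of the retrieval flow under these assumptions: a locally hosted Weaviate instance (v3 client API), a Hugging Face-served gte-large embedding model, and an Azure OpenAI chat deployment. Identifiers such as the "CompanyDocs" index and the "gpt-4o" deployment name are illustrative placeholders, not the production configuration.

```python
import weaviate
from langchain_community.vectorstores import Weaviate
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_openai import AzureChatOpenAI
from langchain.chains import RetrievalQA

# Embedding model served locally for data privacy
# (gte-large initially; later replaced by qwen3-embedding).
embeddings = HuggingFaceEmbeddings(model_name="thenlper/gte-large")

# Self-hosted Weaviate holds chunks ingested from the company platforms.
# Assumes the v3 weaviate-client API and a local instance on port 8080.
client = weaviate.Client("http://localhost:8080")
vector_store = Weaviate(
    client=client,
    index_name="CompanyDocs",   # placeholder class name
    text_key="content",
    embedding=embeddings,
    by_text=False,              # query with our own embedding model
)

# Azure OpenAI deployment answers questions grounded in retrieved chunks.
# Assumes AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are set in the env.
llm = AzureChatOpenAI(
    azure_deployment="gpt-4o",  # placeholder deployment name
    api_version="2024-02-01",
    temperature=0,
)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vector_store.as_retriever(search_kwargs={"k": 4}),
)

print(qa.invoke({"query": "How do I rotate the staging API keys?"})["result"])
```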
Reduced support requests to the development team by 50%. Employees reported that the chatbot's semantic search surfaced far more relevant results than Confluence's keyword search. Deployed internally and actively used across the company, enabling self-service for developer-level questions. First production LLM application at the company.