Reading this blog post scared me a bit. The use case I proposed was building a "simple" RAG chatbot over our internal docs (~50 Confluence pages, slowly growing), plus Elasticsearch and another process my team handles. I was planning on a stack like Streamlit, text-embedding-3-small, and FAISS for the vector store, all driven by a Python script.
It didn't seem too expensive or too hard given the handful of queries my team would run against it, and it was a low-hanging-fruit pain point I thought a RAG chatbot could improve. On top of that, Atlassian Rovo kept reaching for external sources even when the answer was already in our internal docs.
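For what it's worth, at ~50 docs the retrieval step really is simple. Here's a minimal, stdlib-only sketch of the core loop: the `embed` function is a toy bag-of-words stand-in for text-embedding-3-small, and the brute-force cosine scan is a stand-in for a FAISS flat index (both are assumptions for illustration, not the real APIs):

```python
import math

def embed(text):
    # Toy stand-in for text-embedding-3-small: counts over a tiny fixed
    # vocabulary. In the real stack, this call goes to the embeddings API.
    vocab = ["incident", "elasticsearch", "index", "alert", "deploy"]
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, docs, k=2):
    # Brute-force nearest-neighbour scan. FAISS does the same job with an
    # index structure, but ~50 docs fit comfortably in a flat scan anyway.
    qv = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, embed(d["text"])),
                    reverse=True)
    return ranked[:k]

docs = [
    {"title": "Runbook: elasticsearch index alerts",
     "text": "elasticsearch index alert incident"},
    {"title": "Deploy guide", "text": "deploy deploy deploy"},
]
hits = top_k("elasticsearch incident", docs, k=1)
# hits[0] is the runbook doc; its text would then go into the LLM prompt
```

The retrieved chunks would then be pasted into the prompt for whatever model answers the question; Streamlit is just the UI layer on top.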
I think you're operating at a scale small enough that there's little risk.
You'll be able to iterate if you run into anything that doesn't work. You should, however, be clear on what problem you and your team are solving, rather than just "get some RAG".
Sure, I neglected to include the pain point itself. Right now we spend a lot of time troubleshooting incidents and working on features related to these two systems, and we rely heavily on our existing internal documentation. Rather than combing through piles of those docs every time, a RAG chatbot made sense to me, and the team seems to agree. Will move forward. Thanks for the input.
Am I still on the right path?