What’s the best way to get related context to feed the prompt using similarity search?

  1. Run a similarity search against the vector index.
  2. Gather the top three to five chunks from the results.
  3. Feed those chunks into the LLM as context for generating the answer.
  4. Verify that the response is grounded in the retrieved chunks.
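The steps above can be sketched end to end. This is a minimal, self-contained illustration: the "vector store" is a hand-made dictionary of toy 3-dimensional vectors standing in for real embeddings, and `retrieve`, `build_prompt`, and the chunk names are all hypothetical, not any particular library's API.

```python
import math

# Toy in-memory "vector store": chunk text -> precomputed embedding.
# In practice these vectors would come from an embedding model; here they
# are hand-made 3-dimensional stand-ins so the example runs anywhere.
CHUNKS = {
    "Uber 2021 revenue summary": [0.9, 0.1, 0.0],
    "Uber 2022 revenue summary": [0.8, 0.2, 0.1],
    "Lyft vs Uber comparison":   [0.1, 0.9, 0.2],
    "Unrelated cooking article": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=3):
    """Steps 1-2: rank chunks by similarity, keep the top k (three to five)."""
    ranked = sorted(CHUNKS, key=lambda c: cosine(CHUNKS[c], query_vec), reverse=True)
    return ranked[:k]

def build_prompt(question, chunks):
    """Step 3: feed the retrieved chunks to the LLM as context."""
    context = "\n".join(f"- {c}" for c in chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

query_vec = [0.85, 0.15, 0.05]  # stand-in for an embedded user question
top = retrieve(query_vec, k=3)
prompt = build_prompt("What was Uber's 2022 revenue?", top)
```

Step 4 (checking the response) happens after the LLM call, which is omitted here.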

If the search doesn’t surface the correct chunks, you can:

  • Extract information from the question to refine the search
  • Apply keyword filters at indexing time

For example, let’s assume you have Uber’s financial reports for 2021 and 2022. If a user asks a question about Uber’s financials in 2022, you can extract “2022” and “Uber” from the question and use them as keyword filters to narrow the search down to only Uber’s 2022 financial document.
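One way to implement that extraction-and-filter step is sketched below. The `DOCUMENTS` metadata, the `KNOWN_COMPANIES` vocabulary, and both function names are assumptions for illustration; a real system would store this metadata alongside each chunk in the vector index and filter before (or during) the similarity search.

```python
import re

# Hypothetical document metadata; in a real system these fields would
# live alongside each chunk in the vector index.
DOCUMENTS = [
    {"title": "Uber financial report 2021", "company": "uber", "year": "2021"},
    {"title": "Uber financial report 2022", "company": "uber", "year": "2022"},
    {"title": "Lyft financial report 2022", "company": "lyft", "year": "2022"},
]

KNOWN_COMPANIES = {"uber", "lyft"}  # assumed keyword vocabulary

def extract_keywords(question):
    """Pull a year and any known company names out of the question."""
    years = re.findall(r"\b(?:19|20)\d{2}\b", question)
    words = re.findall(r"[a-z]+", question.lower())
    companies = {w for w in words if w in KNOWN_COMPANIES}
    return years, companies

def filter_documents(question):
    """Keep only documents whose metadata matches the extracted keywords."""
    years, companies = extract_keywords(question)
    results = DOCUMENTS
    if years:
        results = [d for d in results if d["year"] in years]
    if companies:
        results = [d for d in results if d["company"] in companies]
    return results

matches = filter_documents("What were Uber's financials in 2022?")
```

With both “2022” and “Uber” extracted, only the matching report survives the filter, so the subsequent vector search runs over a much smaller, more relevant set.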

If you don’t have a 2022 document but you do have one comparing Lyft to Uber, or one covering Uber’s 2023 financials, vector search can still pick those up. The details won’t be accurate for a 2022 summary, but refining the results with keywords and an understanding of the text can help improve accuracy.
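One way to apply that keyword refinement when only near-miss documents exist is a hybrid score that blends vector similarity with keyword overlap. This is a sketch under assumptions: the candidate titles, their vector scores, and the `alpha = 0.7` weighting are all made up for illustration, not recommended values.

```python
def keyword_overlap(text, keywords):
    """Fraction of the extracted keywords that appear in the chunk text."""
    words = set(text.lower().split())
    return len(words & keywords) / max(len(keywords), 1)

def hybrid_score(vector_score, text, keywords, alpha=0.7):
    """Blend vector similarity with keyword overlap.
    alpha = 0.7 is an assumed weighting, not a tuned value."""
    return alpha * vector_score + (1 - alpha) * keyword_overlap(text, keywords)

# Two near-miss candidates when no exact 2022 Uber report exists,
# paired with made-up vector similarity scores.
candidates = [
    ("Lyft vs Uber comparison 2022", 0.80),
    ("Uber financial report 2023", 0.78),
]
keywords = {"uber", "2022"}
ranked = sorted(
    candidates,
    key=lambda c: hybrid_score(c[1], c[0], keywords),
    reverse=True,
)
```

Here the keyword boost favors the document that mentions both “uber” and “2022”, steering the LLM toward context that at least matches the year the user asked about.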
