Ask HN: Avoiding irrelevant or undesirable model context in RAG

1 point by davidajackson 1 year ago | 1 comment
Considering a RAG prompt say of the format:

Answer question <q> using context <c>.

Say <c> contradicts something the model learned during training.

There are certain situations where one wants the LLM to use its own model knowledge, and others where one does not. Is there any formal research in this area?
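For concreteness, here is a minimal sketch of the two prompting strategies (the template wording is entirely hypothetical, just to illustrate the knob being asked about):

```python
# Two illustrative prompt templates for steering how much an LLM should
# rely on retrieved context <c> vs. its parametric (trained-in) knowledge.

CONTEXT_ONLY = (
    "Answer the question using ONLY the context below. "
    "If the context contradicts what you believe, follow the context. "
    "If the answer is not in the context, say you don't know.\n\n"
    "Context: {context}\n\nQuestion: {question}"
)

ALLOW_MODEL_KNOWLEDGE = (
    "Answer the question. Use the context below if it is relevant and "
    "trustworthy; otherwise fall back on your own knowledge, and state "
    "which source you relied on.\n\n"
    "Context: {context}\n\nQuestion: {question}"
)

def build_prompt(template: str, context: str, question: str) -> str:
    """Fill a template with the retrieved context and the user question."""
    return template.format(context=context, question=question)

# Example: same question and context, two different reliance policies.
strict = build_prompt(CONTEXT_ONLY, "<c>", "<q>")
loose = build_prompt(ALLOW_MODEL_KNOWLEDGE, "<c>", "<q>")
```

The open question is whether the model actually obeys either instruction when the context contradicts its training data, which is presumably what formal research would need to measure.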

  • Ephil012 1 year ago
    At my company, we developed an open-source library to measure whether the context the model received is accurate. While it's not exactly what you're asking for, you could in principle use it to detect when an LLM deviates from the provided context, and then tune the LLM so that it doesn't always defer to that context.

    Shameless plug for the library: https://github.com/TonicAI/tvalmetrics