Offline llama3 sends corrections back to Meta's server – I was not aware of it

1 point by jeena 1 year ago | 5 comments
  • raverbashing 1 year ago
    Wait

    This person is asking the model (running on Ollama) what it does?

    The model's answer might mean something when running on FB infra, but here it is meaningless. Even more so at higher temperatures

    They need to check the Ollama source for that

    They're doing no better than people asking ChatGPT whether it wrote that assignment paper they got

    • jeena 1 year ago
      Oh man. Ok, TIL that I'm not better than all the old people on Facebook believing what the scammers tell them.
    • reneberlin 1 year ago
      I think there is a clear misunderstanding of how LLMs work: a network request has nothing to do with the model itself. Even if "function calling" is possible, it is the user's choice which functions can be called, and if one of them does make a network request, the URI and request body are determined entirely by the user's side of the implementation.

      It feels a bit like trolling. I somehow can't believe this is meant seriously.
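      To illustrate the point about function calling (a minimal sketch with made-up names, not Ollama's actual API): the model only emits text describing a call; nothing runs, and nothing leaves the machine, unless the user's own dispatch code decides to act on it.

```python
import json

# Hypothetical tool-call payload, as a local model might emit it.
# This is just text the model produced; it has no effect by itself.
model_output = '{"function": "get_weather", "arguments": {"city": "Berlin"}}'

# The user's code owns the registry: only functions listed here can run.
def get_weather(city: str) -> str:
    return f"(pretend forecast for {city})"

REGISTRY = {"get_weather": get_weather}

def dispatch(raw: str) -> str:
    call = json.loads(raw)
    fn = REGISTRY.get(call["function"])
    if fn is None:
        # The model cannot force a call the user never registered.
        return "refused: unknown function"
    return fn(**call["arguments"])

print(dispatch(model_output))
# A request for an unregistered function goes nowhere:
print(dispatch('{"function": "phone_home", "arguments": {}}'))
```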

      • okokwhatever 1 year ago
        mmm!!! so it's gonna be necessary to deny some hosts...

        Do you have a list of the hosts it calls back to?

        • jeena 1 year ago
          No, not yet, I'd need to set up something to check.
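          One way to set that up (a rough, Linux-only sketch of my own, not anything from the thread): read /proc/net/tcp while Ollama is running and decode the remote endpoints, which the kernel stores as little-endian hex. Running tcpdump or Wireshark alongside would be the more thorough route.

```python
# Linux-only sketch for spotting outbound TCP connections while a
# process such as Ollama is running. /proc/net/tcp encodes IPv4
# endpoints as little-endian hex, e.g. '0100007F:1F90' = 127.0.0.1:8080.
def decode_endpoint(hex_addr):
    """Turn a /proc/net/tcp address field into an (ip, port) tuple."""
    ip_hex, port_hex = hex_addr.split(":")
    # Bytes are stored in reverse order, so walk the hex pairs backwards.
    octets = [str(int(ip_hex[i:i + 2], 16)) for i in range(6, -2, -2)]
    return ".".join(octets), int(port_hex, 16)

def remote_endpoints(path="/proc/net/tcp"):
    """List (ip, port) pairs this machine currently has TCP sockets to."""
    results = []
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            ip, port = decode_endpoint(fields[2])  # rem_address column
            if ip != "0.0.0.0":  # ignore listening sockets
                results.append((ip, port))
    return results

if __name__ == "__main__":
    for ip, port in remote_endpoints():
        print(f"{ip}:{port}")
```

          This only shows IPs, so any suspicious address would still need a reverse lookup to map it to a hostname.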