Offline llama3 sends corrections back to Meta's server – I was not aware of it
1 point by jeena 1 year ago | 5 comments

- raverbashing 1 year ago
Wait.
This person is asking the model (running on Ollama) what it does?
The model's answer might have some significance when running on FB infrastructure, but here it is meaningless, and even more so at higher temperatures.
They need to check the Ollama source for that.
They're doing no better than people asking ChatGPT whether it wrote the assignment paper they were handed.
- jeena 1 year ago
Oh man. OK, TIL that I'm no better than all the old people on Facebook who believe what the scammers tell them.
- reneberlin 1 year ago
I think there is a clear misunderstanding here of how LLMs work: a network request has nothing to do with the model itself. Even if "function calling" is possible, it is the user's choice which functions can be called, and if one of them makes a network request, which URI and request body get sent is entirely determined by the user's side of the implementation.
It feels a bit like trolling. I somehow can't believe this is meant seriously.
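To make the point concrete, here is a minimal sketch of what tool calling against Ollama's local /api/chat could look like (the model tag and the post_correction tool are made up for illustration, and it assumes an Ollama build recent enough to support tools). The model can only return a JSON description of a call; any actual HTTP request would have to be made by code the user wrote and controls:

    import json
    import urllib.request

    # Minimal sketch, assuming a local Ollama at the default port and a
    # model that supports tool calling. The point: the model never performs
    # network I/O itself. It can only return a *description* of a tool
    # call, and this script decides whether anything gets executed.

    OLLAMA_URL = "http://localhost:11434/api/chat"

    tools = [{
        "type": "function",
        "function": {
            "name": "post_correction",  # hypothetical tool, for illustration
            "description": "Send a correction text somewhere",
            "parameters": {
                "type": "object",
                "properties": {"text": {"type": "string"}},
                "required": ["text"],
            },
        },
    }]

    payload = {
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "Correct this: 'teh cat'"}],
        "tools": tools,
        "stream": False,
    }

    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)

    # Any "call" the model makes exists only as JSON in this variable. No
    # request to Meta (or anyone else) happens unless we write code that
    # reads this JSON and performs one.
    for call in reply["message"].get("tool_calls", []):
        print("model requested:", call["function"]["name"],
              call["function"]["arguments"])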
- okokwhatever 1 year ago
Hmm!!! So it's going to be necessary to deny some hosts...
Do you have a list of the hosts being called back to?
- jeena 1 year ago
No, not yet, I'd need to set up something to check.
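A minimal sketch of what such a check could look like, assuming psutil is installed and the inference runs in a process whose name contains "ollama": poll the system's sockets and log every remote host those processes connect to. An empty log while the model is answering would be strong evidence that nothing is phoning home.

    import time
    import psutil  # pip install psutil

    # Minimal sketch, assuming processes named "ollama" are already
    # running. Polls system-wide sockets and prints every new remote
    # address an ollama process talks to. May need root/admin privileges
    # to see other users' sockets on some platforms.

    ollama_pids = {p.pid for p in psutil.process_iter(["name"])
                   if p.info["name"] and "ollama" in p.info["name"].lower()}
    print("watching pids:", ollama_pids)

    seen = set()
    while True:
        for conn in psutil.net_connections(kind="inet"):
            if conn.pid in ollama_pids and conn.raddr:
                addr = (conn.raddr.ip, conn.raddr.port)
                if addr not in seen:
                    seen.add(addr)
                    print(f"pid {conn.pid} -> {addr[0]}:{addr[1]}")
        time.sleep(1)

A packet capture (e.g. tcpdump) would give the same answer with more detail, but this is enough to produce the host list asked about above.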