Microsoft Phi-3 Cookbook
152 points by nonfamous 1 year ago | 57 comments
- xkgt 1 year ago: Looks like some of the docs were generated by an LLM. I see pictures with typos and imagined terms, incomplete text, etc. I wonder to what extent we can trust the rest of the docs.
https://github.com/microsoft/Phi-3CookBook/blob/main/md/04.F...
- tmm84 1 year ago: https://github.com/microsoft/Phi-3CookBook/commit/ba688b9a35...
Scroll down to the end; the removed text is totally suspect. I wouldn't be too surprised if all of this was generated by an LLM and then anything strange was edited by a human. Another reason not to leave everything to the LLM.
- sgerenser 1 year ago: "Gool for specific tucks!"
- luke-stanley 1 year ago: Good for specific tasks?
- simonw 1 year ago: You can interact with the new Phi-3 vision model on this page (no login required): https://ai.azure.com/explore/models/Phi-3-vision-128k-instru...
- Dowwie 1 year ago: "We are introducing Phi Silica which is built from the Phi series of models and is designed specifically for the NPUs in Copilot+ PCs. Windows is the first platform to have a state-of-the-art small language model (SLM) custom built for the NPU and shipping inbox. Phi Silica API along with OCR, Studio Effects, Live Captions, Recall User Activity APIs will be available in Windows Copilot Library in June. More APIs like Vector Embedding, RAG API, Text Summarization will be coming later."
2024: the year of personal computers with neural processing units running small language models
How do NPUs work? Who builds them, and how are they built? Are they capable of running a variety of SLM-like firmware?
- killingtime74 1 year ago: For the price of these new laptops one can already buy a MacBook with a general-purpose GPU that is more than capable of running these models. One can buy a Windows laptop with a dedicated graphics card that can also run these models. Perhaps it would be interesting if the price were substantially lower.
- TiredOfLife 1 year ago: These new laptops are half the price of MacBooks with the same RAM and storage.
- runjake 1 year ago: In which configs and which geographic locations? In the US, pricing for equivalent Surface and Mac hardware configurations looks the same to me.
And that's if the Snapdragon X Elite is actually on par with the Apple M3, like Microsoft claims.
Earlier X Elite benchmarks[1][2] showed that it was behind the M3, but hopefully Qualcomm have made some solid performance changes since then. Competition is good.
1. https://www.xda-developers.com/snapdragon-x-elite-benchmarks... (note: this link compares against the M2)
2. https://www.tomshardware.com/pc-components/cpus/early-snapdr...
- 3abiton 1 year ago: But not the same performance. One of the reasons QC is releasing nothing but controlled benchmarks is most likely the subpar performance of Windows on ARM. This will be the biggest hurdle for QC's Elite chips: competing with the M-series, which is designed in tandem with macOS.
- killingtime74 1 year ago: In Australia they are the same price ($1.9k AUD vs $1.6k for a MacBook Air). How much where you're at?
- dhruvdh 1 year ago: The NPU runs this Silica model at 1.5 watts. MacBooks cannot even drive multiple monitors in this price range.
- adastra22 1 year ago: The MacBooks have an NPU too. Just nobody has done anything with them.
- refulgentis 1 year ago: The initial model release had a terrible, frequent issue with emitting the wrong "end of message" token, or never emitting one.[1] That is a very serious issue that breaks chat.
The ones from today still have this issue.[2]
Beyond that, they've been pushing new ONNX features enabling LLMs via Phi for about a month now. The ONNX runtime that supports it still isn't out, much less the downstream integration of it into the iOS/Android runtimes. Heck, the Python package for it isn't supported anywhere but Windows.
It's absolutely wild to me that MS is pulling this stuff with ~0 discussion or reputation repercussions.
I'm a huge ONNX fan and bet a lot on it; it works great. It was clear to me about 4 months ago that Wintel's "AI PC" buildup meant "ONNX x newer Phi".
It is very frustrating to see an extremely late rush, propped up by Potemkin blog posts that I have to waste time on only to find out they're just straight-up lying. Burnt a lot of goodwill that they worked hard to earn.
I am virtually certain that the new Windows AI features previewed yesterday are going to land horribly if they actually try to land them this year.
[1] https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf... [2] https://x.com/jpohhhh/status/1793003272187351195
- andy_xor_andrew 1 year ago: In the screenshot you shared in the twitter link [2], the model appears to do everything correctly: it terminated its message with <|end|>, which is correct according to the Phi-3 prompt format. It seems whatever environment you are hosting it in does not understand that <|end|>, and not <|endoftext|>, should be considered the EOS string?
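For anyone hitting the same symptom, a minimal sketch of the workaround with Hugging Face transformers is to register <|end|> as an additional EOS token at generation time. The model id below is the real HF one; the prompt and decoding settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Write a haiku about NPUs."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Stop on <|end|> in addition to the default <|endoftext|>; otherwise
# generation runs past the end of the assistant's reply.
end_id = tokenizer.convert_tokens_to_ids("<|end|>")
output = model.generate(
    input_ids,
    max_new_tokens=128,
    eos_token_id=[tokenizer.eos_token_id, end_id],
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```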
- refulgentis 1 year ago: Good call. It's an HF space for Phi Vision; maybe someone jumped the gun / didn't set things up properly, or it's splitting <|end|> into multiple tokens.
- nonfamous 1 year ago: FYI, there's a recipe for running Phi-3 under ONNX on iOS in the linked repository: https://github.com/microsoft/Phi-3CookBook/blob/main/md/03.I...
- refulgentis 1 year ago: Yes, that's the Potemkin village that broke this camel's back. It was linked in an announcement blog post yesterday.
You have to build two in-development libraries: one from ToT, and one from a dev branch that temporarily makes it compile for iOS on a Mac.
The dev branch doesn't actually exist.
If you use the only branch by the author on the repo, it doesn't work.
The dev branch that doesn't work is a few commits on top of ToT from 2 months ago.
At the end of that non-existent road is a model that can't end messages properly, whether in MyThing.app (which uses llama.cpp), LM Studio, Ollama, or the MS cloud API.
I can't ship on that, and neither can anyone else.
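For context, the workaround this forces on llama.cpp-based apps is to treat <|end|> as an explicit stop string yourself. A rough sketch with llama-cpp-python, where the model path is a placeholder for whatever Phi-3 GGUF you pulled:

```python
from llama_cpp import Llama

# Placeholder path: any Phi-3 GGUF downloaded from Hugging Face.
llm = Llama(model_path="./Phi-3-mini-4k-instruct-q4.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
    # Manually cut generation at <|end|>, since the broken GGUFs don't
    # mark it as the end-of-message token.
    stop=["<|end|>", "<|endoftext|>"],
)
print(out["choices"][0]["message"]["content"])
```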
- pseudosavant 1 year ago: It looks like the Phi-3 Vision model isn't available in GGUF or ONNX. I was hoping there was a GGUF I could use with llamafile.
- zb3 1 year ago: The bigger news is that Phi-3-Small, Phi-3-Medium, and Phi-3-Vision were finally released.
- refulgentis 1 year ago: "Finally"!? Vision wasn't even on the table until today! And they're clearly rushed and fundamentally broken for chat use cases.
- mark_l_watson 1 year ago: I installed Phi:medium last night on my Mac using Ollama and, subjectively, it looks good. I was surprised by the claim that it is better than Mixtral-8x7B.
I largely ignore benchmarks now, but on the other hand, while trying many models myself is easy for simple tests, really using an LLM for an application is a lot of work.
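(If anyone wants to reproduce this kind of quick local test, here's a minimal sketch with the ollama Python client; it assumes you've already run `ollama pull phi3:medium`, and the prompt is just an example.)

```python
import ollama  # pip install ollama

response = ollama.chat(
    model="phi3:medium",
    messages=[
        {
            "role": "user",
            "content": "Summarize the trade-offs of small language models in two sentences.",
        }
    ],
)
print(response["message"]["content"])
```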
- nashashmi 1 year ago: Slightly off topic: what's the smallest LLM I could reasonably use to do language processing and rewriting of a large library of Word documents, for the purposes of querying information and regurgitating summaries or detailed information?
My use case is very simple: take 1,000 Word documents, each filled with two to three pages of information and pictures, and then output a set of requested information via prompting. Is there something off the shelf? Or do I have to make this?
- ukuina 1 year ago: Sounds like a good RAG use case, unless all 1k documents need to be comprehended simultaneously.
Look at H2O.ai: https://github.com/h2oai/h2ogpt
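To make the RAG suggestion concrete, here is a rough sketch of the retrieval half using sentence-transformers. The corpus, embedding model, helper names, and prompt are all illustrative assumptions, not h2ogpt's actual API (h2ogpt packages this whole pipeline for you):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical corpus: one string per Word document, extracted
# beforehand with something like python-docx.
docs = ["...text of document 1...", "...text of document 2..."]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # vectors are normalized, so dot product == cosine
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# Stuff the retrieved text into the prompt of whatever local SLM you run:
question = "What are the project deadlines?"
context = "\n---\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```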
- jpdus 1 year ago: Wow, this cookbook is actually really bad? I expected something like the OpenAI or Anthropic cookbooks, but this seems to be AI-generated, low-quality content without any code examples or interesting recipes?
The Phi-3 models are great though; the vision model especially has great potential for low-latency applications (like robotics?)...
- ukuina 1 year ago: Yeah, this is an INSTALL.md masquerading as a cookbook.
- GaggiX 1 year ago: https://huggingface.co/collections/microsoft/phi-3-6626e15e9...; all of these models except Phi-3 mini are new.
- free_bip 1 year ago: Looking at the benchmarks, it seems like Phi-3 Small (7B) marginally beats out Llama3-8B on most tasks, which is pretty exciting!
- zone411 1 year ago: On my benchmark (NYT Connections), Phi-3 Small performs well (8.4) but Llama 3 8B Instruct is still better (12.3). Phi-3 Medium 4k is disappointing and often fails to properly follow the output format.
- Filligree 1 year ago: Have you found either model to be good enough to do anything interesting, reliably?
- ukuina 1 year ago: Llama3-8B is adequate for non-technical summarization or simple categorization.
- ashu1461 1 year ago: It also seems to be comparable to GPT-3.5 Turbo, which I find hard to believe. People have obviously found a way around these benchmarks.
- FezzikTheGiant 1 year ago: Was playing around with this model. Why does it return 2 or 3 responses when I ask it for one? I asked it for a JSON response and it generates 2 or 3 at a time. What's with this?