Solving Sudoku in Python Packaging
305 points by Yenrabbit 8 months ago | 53 comments
- simonw 8 months ago I love this so much. I dug around a bit and figured out how it works - I have an explanation (with an illustrative diagram) here: https://simonwillison.net/2024/Oct/21/sudoku-in-python-packa...
Figuring out how it works is a great way to learn a bit more about how Python packaging works under the hood. I learned that .whl files contain a METADATA file listing dependency constraints as "Requires-Dist" rules.
I ran a speed comparison too. Using the uv pip resolver it took 0.24s - with the older pip-compile tool it took 17s.
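If you want to peek at those Requires-Dist rules yourself, the METADATA file can be read straight out of the archive with the standard library. A minimal sketch (the helper name is mine, and the wheel path is whatever file you happen to have downloaded):

```python
import zipfile

def requires_dist(wheel_path):
    """List the Requires-Dist constraint lines from a .whl file.

    A wheel is just a zip archive; the dependency constraints live in
    the METADATA file inside its *.dist-info directory.
    """
    with zipfile.ZipFile(wheel_path) as whl:
        meta = next(n for n in whl.namelist() if n.endswith(".dist-info/METADATA"))
        text = whl.read(meta).decode("utf-8")
    return [line for line in text.splitlines() if line.startswith("Requires-Dist:")]
```

Run over one of the sudoku wheels, this prints exactly the `!= value` constraints quoted further down the thread.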
- TeMPOraL 8 months ago Tangent, but I wondered what libuv had to do with speeding up Python packaging, and it turns out: nothing. I wonder why someone chose to name a pip replacement in a way that effectively collides with several tools and libraries across many languages...
- giancarlostoro 8 months ago I agree. While I think it looks amazing, it's a poor naming choice.
- seanw444 8 months ago Wow, uv really is fast.
- jebebeebehhe 8 months ago As is simonw, writing that post in under 60 minutes, assuming he first saw the concept here on HN.
- zahlman 8 months ago People keep trying to sell the speed of such solutions as a killer feature for uv, but I think I must not be anywhere near the target audience. The constraint-solving required for the sorts of projects I would typically work on is not even remotely as complex, while I'm bottlenecked by a slow, unreliable Internet connection (and the lack of a good way to tell Pip not to check PyPI for new versions and only consider what's currently in the wheel cache).
- the_mitsuhiko 8 months ago > while I'm bottlenecked by a slow, unreliable Internet connection (and the lack of a good way to tell Pip not to check PyPI for new versions and only consider what's currently in the wheel cache).
Which is one of the reasons why uv is so fast: it reduces the number of times it needs to go to PyPI at all. Not only does it cache really well, it also hits PyPI more efficiently and in a highly parallel way. Once you've resolved once, future resolutions will likely bypass PyPI almost entirely.
- zahlman 8 months ago Oh, that's good to hear. The performance discourse around uv seems to revolve around "written in Rust! Statically compiled!" all the time, but an algorithmic change like that is something that could conceivably make its way back into Pip. (Or perhaps into future competing tools still written in Python. I happen to have a design of my own.)
- kvdveer 8 months ago Our CI took 2 minutes to install the requirements. Adding uv dropped that to seconds. Now most time is spent on running tests instead of installing requirements.
Of course we could've cached the venv, but cache invalidation is hard, and this is a very cheap way to avoid it.
- zahlman 8 months ago Or you could cache the solve, surely? Simply including your transitive dependencies explicitly and pinning everything should do the trick - there is active work on a lockfile standard, but it's just the sort of topic that attracts ungodly amounts of bikeshedding.
(One of these days I'll have to figure out this "CI" thing. But my main focus is really just solving interesting programming problems, and making simple elegant tools for mostly-small tasks.)
- simonw 8 months ago More significant than the speed improvement, in my opinion, is the space saving.
The reason uv is fast is that it creates hard links from each of your virtual environments to a single shared cached copy of the dependencies (using copy-on-write in case you want to edit them).
This means that if you have 100 projects on your machine that all use PyTorch you still only have one copy of PyTorch!
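The mechanism can be sketched in a few lines. This is a toy illustration of hard-link caching, not uv's actual code; real tools also have to fall back to copying on filesystems without hard-link support:

```python
import os

def link_into_env(cache_file, site_packages):
    """Hard-link one shared cached copy of a file into an environment.

    A hard link adds a directory entry pointing at the same inode, so a
    package cached once costs (almost) no extra disk per virtualenv.
    """
    target = os.path.join(site_packages, os.path.basename(cache_file))
    if not os.path.exists(target):
        os.link(cache_file, target)
    return target
```

Editing a linked file would normally change every environment at once, which is why copy-on-write matters in the real implementation.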
- zahlman 8 months ago This is definitely a feature I wanted to have in my own attempt (hopefully I can start work on it before the end of the year; I think I'll bump up my mental priority for blogging about the design). I have also considered symlink-based (not sure if this really works) and .pth file-based approaches.
(TIL `os.link` has been supported on Windows since 3.2.)
- itsbjoern 8 months ago Personally, I'm just a fan of people improving dev tooling, regardless of whether it ultimately makes a huge difference to my workflow. I haven't used uv yet, but I'm still tangentially following it because, despite pip and Poetry being great tools, I have had my fair share of grievances with them.
- yunohn 8 months ago In my experience, uv is a drop-in replacement for almost all of your Python packaging needs. I highly recommend giving it a shot - it's saved me measurable time in just a few months.
- ilyagr 8 months ago How does it encode the idea of having all the numbers on each line/square?
- visarga 8 months ago That's why installing an ML repo feels like sudoku. You install everything, and at the last step you realize your neural net uses FlashAttention2, which only works on an NVIDIA compute capability that isn't available on your cloud VM, and you need to start over from scratch.
- hskalin 8 months ago Sometimes I just change the version of the package in the requirements to fit with the others and pray that it works out (a few times it does).
- pjc50 8 months ago See the discussion on why SQLite insists on vendoring its build dependencies as far as possible and not using, say, CMake.
- austinjp 8 months ago This describes the day I wasted on Monday before I gave up and wrote some damn deterministic code instead of using some damn AI.
- nicman23 8 months ago Honestly, if the ML project does not have a Docker image - not a compose file, not instructions to build an image - I do not even bother any more.
- anthk 8 months ago Guix fixes that on the spot.
- chatmasta 8 months ago Here's the same thing in Poetry (2022): https://www.splitgraph.com/blog/poetry-dependency-resolver-s...
- teschmitt 8 months ago Was just about to say: I've seen this before, but building it with a universally usable requirements.txt is even cooler.
- echoangle 8 months ago > Solving the versions of python package from your requirements is NP-complete, in the worst case it runs exponentially slow. Sudokus are also NP-complete, which means we can solve sudokus with python packaging.
Is that actually sufficient? Can every system that’s solving something that’s NP-complete solve every other NP-complete problem?
- tzs 8 months ago > Can every system that's solving something that's NP-complete solve every other NP-complete problem?
Others have given the answer (yes) and provided some links. But it is nice to have an explanation in thread so I'll have a go at it.
The key idea is the idea of transforming one problem to another. Suppose you have some problem X that you do not know how to solve, and you've got some other problem Y that you do know how to solve.
If you can find some transform that you can apply to instances of X that turns them into instances of Y and that can transform solutions of those instances of Y back to solutions of X, then you've got an X solver. It will be slower than your Y solver because of the work to transform the problem and the solution.
Now let's limit ourselves to problems in NP. This includes problems in P which is a subset of NP. (Whether or not it is a proper subset is the famous P=NP open problem).
If X and Y are in NP and you can find a polynomial time transformation that turns X into Y then in a sense we can say that X cannot be harder than Y, because if you know how to solve Y then with that transformation you also know how to solve X albeit slower because of the polynomial time transformations.
In 1971 Stephen Cook proved that a particular NP problem, boolean satisfiability, could serve as problem Y for every other problem X in NP. In a sense then no other NP problem can be harder than boolean satisfiability.
Later other problems were also found that were universal Y problems, and the set of them was called NP-complete.
So if Python packaging is NP-complete then every other NP problem can be turned into an equivalent Python packaging problem. Note that the other problem does not have to also be NP-complete. It just has to be in NP.
Sudoku and Python Packaging both being NP-complete means it goes both ways. You can use a Python package solver to solve your sudoku problems and you can use a sudoku solver to solve your Python packaging problems.
- zahlman 8 months ago > Can every system that's solving something that's NP-complete solve every other NP-complete problem?
Yes, by definition (https://en.wikipedia.org/wiki/NP-completeness, point 4).
- rolisz 8 months ago Yes, NP-complete means that every other NP problem is reducible to it.
- arjvik 8 months ago I think for Sudoku to be NP-complete, it needs to be generalized to arbitrary board sizes (at the very least).
- tetha 8 months ago Correct - when the complexity folks talk about NP-complete sudoku, it's always about solving Sudokus of arbitrary (n² x n²) size, not a fixed board.
The problem class of "Solve an arbitrary Sudoku of Size 9" might even be constant runtime, since it's a finite set to search through.
- SJC_Hacker 8 months ago Does the Sudoku size have to be a perfect square, or can other sizes exist?
I suppose you could leave some blank and make the squares the next largest perfect square
- blharr 8 months ago I looked it up, and there are 6.6e21 9x9 sudokus, which I wouldn't consider constant time practically.
- empath75 8 months ago https://www.youtube.com/watch?v=6OPsH8PK7xM
I think this video makes it obvious why that's true, in a pretty intuitive way. I posted it a few days ago as a link and it never got traction.
SAT is the equivalent of being able to find the inverse of _any_ function, because you can describe any function with logic gates (for obvious reasons), and any collection of logic gates that describes a function is equivalent to a SAT problem. All you need to do is codify the function in logic gates, including the output you want, and then ask a SAT solver to find the inputs that produce that output.
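A toy illustration of that idea (brute force over assignments rather than a real SAT solver, and a single AND gate rather than an interesting function): encode the gate c = a AND b as CNF clauses, assert the output you want, and search for inputs that satisfy everything.

```python
from itertools import product

A, B, C = 1, 2, 3  # variable ids; a negative literal means "negated"
clauses = [
    [-C, A],      # c implies a
    [-C, B],      # c implies b
    [-A, -B, C],  # (a and b) implies c
    [C],          # assert the output we want to "invert" for
]

def solve(clauses, nvars):
    """Return the first assignment satisfying every clause, or None."""
    for bits in product([False, True], repeat=nvars):
        assign = {i + 1: v for i, v in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return assign
    return None

model = solve(clauses, 3)  # the only model sets a = b = True
```

A real SAT solver replaces the exponential `product` loop with clever search, but the encoding step is the same.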
- tzs 8 months ago
- yochem 8 months ago No way pip actually is a really inefficient SAT solver!
- stabbles 8 months ago For a long time it was not, because there was no backtracking.
Now it is just an exhaustive, recursive search: for the current package, try versions from newest to oldest; enqueue its dependencies; if everything is satisfied, return; on conflict, continue with the next version.
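That search shape can be sketched like this (a toy model, not pip's actual code; the index format here is made up for illustration):

```python
def resolve(index, todo, pinned=None, allowed=None):
    """Backtracking version resolution.

    index: pkg -> {version: [(dep_pkg, set_of_allowed_dep_versions), ...]}
    todo:  list of package names still to be pinned
    """
    pinned = dict(pinned or {})
    allowed = dict(allowed or {})
    if not todo:
        return pinned
    pkg, rest = todo[0], todo[1:]
    if pkg in pinned:
        return resolve(index, rest, pinned, allowed)
    for v in sorted(index[pkg], reverse=True):  # try newest versions first
        if pkg in allowed and v not in allowed[pkg]:
            continue
        new_allowed = dict(allowed)
        ok = True
        for dep, versions in index[pkg][v]:
            # intersect this choice's constraints with earlier ones
            new_allowed[dep] = new_allowed.get(dep, versions) & versions
            if not new_allowed[dep] or (dep in pinned and pinned[dep] not in new_allowed[dep]):
                ok = False  # conflicts with an earlier decision
        if not ok:
            continue
        deps = [dep for dep, _ in index[pkg][v]]
        solution = resolve(index, rest + deps, {**pinned, pkg: v}, new_allowed)
        if solution is not None:
            return solution
    return None  # no version works here: backtrack
```

In the worst case this is exponential, which is exactly why an NP-complete puzzle can be smuggled into it.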
- taeric 8 months ago If there was no backtracking, that implies it couldn't solve every sudoku? That is rather amusing, with the implication that it couldn't solve every dependency set, either?
- fernandotakai 8 months ago uv actually talks about this in their resolver docs: https://docs.astral.sh/uv/reference/resolver-internals/
- alentred 8 months ago This is BRILLIANT! I knew of a trend of implementing lots of different things at compile time (in the Scala and Haskell communities at least) - definitely fun and quirky, but it never seemed that "special". This one has an air of old-school computer magic around it, probably because it is so elegant and simple.
- mi_lk 8 months ago See also this 2008 post using the Debian package system to solve Sudoku:
https://web.archive.org/web/20160326062818/http://algebraict...
- ziofill 8 months ago But how does it know the constraints?
- thangngoc89 8 months ago This is the content of sudoku_0_0-1-py3-none-any.whl. So when the (0,0) cell is 1, none of the cells in the same row, column, or subgrid may be 1.
  Requires-Dist: sudoku_0_1 != 1
  Requires-Dist: sudoku_0_2 != 1
  Requires-Dist: sudoku_0_3 != 1
  Requires-Dist: sudoku_0_4 != 1
  Requires-Dist: sudoku_0_5 != 1
  Requires-Dist: sudoku_0_6 != 1
  Requires-Dist: sudoku_0_7 != 1
  Requires-Dist: sudoku_0_8 != 1
  Requires-Dist: sudoku_1_0 != 1
  Requires-Dist: sudoku_2_0 != 1
  Requires-Dist: sudoku_3_0 != 1
  Requires-Dist: sudoku_4_0 != 1
  Requires-Dist: sudoku_5_0 != 1
  Requires-Dist: sudoku_6_0 != 1
  Requires-Dist: sudoku_7_0 != 1
  Requires-Dist: sudoku_8_0 != 1
  Requires-Dist: sudoku_0_1 != 1
  Requires-Dist: sudoku_0_2 != 1
  Requires-Dist: sudoku_1_0 != 1
  Requires-Dist: sudoku_1_1 != 1
  Requires-Dist: sudoku_1_2 != 1
  Requires-Dist: sudoku_2_0 != 1
  Requires-Dist: sudoku_2_1 != 1
  Requires-Dist: sudoku_2_2 != 1
- jsnell 8 months ago The constraints are going to be static and independent of the puzzle, so I expect they're encoded in the package dependencies. For example, version 1 of the package sudoku_0_0 will conflict with all of: version 1 of sudoku_[0-8]_0, version 1 of sudoku_0_[0-8], and version 1 of sudoku_[012]_[012].
- roywiggins 8 months ago generate_packages makes it moderately clear:
https://github.com/konstin/sudoku-in-python-packaging/blob/m...
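A rough reconstruction of what such a generator has to do (a sketch under my own assumptions; the repo's actual generate_packages may differ in details): for each cell holding a value, emit a != constraint against every peer cell in the same row, column, and 3x3 box.

```python
def peers(r, c):
    """All cells sharing a row, column, or 3x3 box with (r, c)."""
    row = {(r, j) for j in range(9)}
    col = {(i, c) for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    box = {(i, j) for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return (row | col | box) - {(r, c)}

def requires_dist_lines(r, c, v):
    """Constraint lines for the wheel encoding 'cell (r, c) holds v'."""
    return sorted(f"Requires-Dist: sudoku_{i}_{j} != {v}" for (i, j) in peers(r, c))
```

Each cell has 20 peers, which matches the (deduplicated) constraint list quoted above for sudoku_0_0.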
- IshKebab 8 months ago Yeah, they missed out the actual interesting bit from the readme...
- worewood 8 months ago This is the type of cool hacking I like to see. Kudos! (Or better, Sukodus :) )
- niyonx 8 months ago How did you even think of that? Nice!
- revskill 8 months ago This is a hack.
- jessekv 8 months ago And why I come here for... er, news.
- anthk 8 months ago Now, in MicroLisp, Common Lisp and maybe Emacs' Elisp too: