Text-to-LoRA: Hypernetwork that generates task-specific LLM adapters (LoRAs)

135 points by dvrp 3 weeks ago | 17 comments
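
For context, a minimal sketch of the idea in the title (hypothetical names and sizes, not the paper's actual architecture): a hypernetwork maps a task embedding to the low-rank factors A and B of a LoRA adapter for a single layer; the paper's pitch is producing such adapters directly from a natural-language task description.

    import torch
    import torch.nn as nn

    class LoRAHypernetwork(nn.Module):
        # Maps a task embedding to the low-rank factors of one layer's adapter.
        def __init__(self, task_dim: int, in_features: int, out_features: int, rank: int = 8):
            super().__init__()
            self.dims = (out_features, rank, in_features)
            self.trunk = nn.Sequential(nn.Linear(task_dim, 256), nn.ReLU())
            self.head_a = nn.Linear(256, rank * in_features)   # emits A
            self.head_b = nn.Linear(256, out_features * rank)  # emits B

        def forward(self, task_embedding: torch.Tensor):
            out_f, rank, in_f = self.dims
            h = self.trunk(task_embedding)
            A = self.head_a(h).view(rank, in_f)
            B = self.head_b(h).view(out_f, rank)
            return A, B  # the adapter's weight update is B @ A
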
  • phildini 2 weeks ago
    I got very briefly excited that this might be a new application layer on top of meshtastic.
    • robertlagrant 2 weeks ago
      Yes! I don't know what LoRA is, but I know what it isn't.
    • jph00 2 weeks ago
      The paper link on that site doesn't work -- here's a working link:

      https://arxiv.org/abs/2506.06105

      • smcleod 2 weeks ago
        Out of interest, why does it depend on or at least recommend such an old version of Python? (3.10)
        • porridgeraisin 2 weeks ago
          Mostly it's whatever the earliest Python version PyTorch supports. While 3.9 is supported until the end of this year, torch wheels and other wheels in the ecosystem were always troublesome on 3.9, so 3.10 it is.

          3.9 would have been the preferred version if not for those issues, simply because it is the default on macOS.

          • smcleod 2 weeks ago
            Yikes, those are both very old. Python pre-3.12 had some serious performance issues. You should aim to run the current stable version, which will contain any number of stability and interoperability fixes. The bundled OS Python versions are often far behind and are better suited to running basic system tools than to every application or script you run; ideally you'd use a Python version manager and an isolated virtual environment.
            • porridgeraisin 2 weeks ago
              ML folks don't care (yes, yes, I'm generalizing...); they will upgrade whenever torch or one of their other favourite libraries tells them to.
          • electroglyph 2 weeks ago
            from pyproject.toml: requires-python = ">= 3.10"

            I still see quite a few people in the ML world using 3.10 as their default...probably just habit, but a closer look at the dependencies might answer your question better.

            • smcleod 2 weeks ago
              Ah, well, that's not as bad, I guess. I saw in their readme that they recommend 3.10, which is often a bit of a red flag that the project in question may not be well maintained, but I agree that quite a few ML repos still note 3.10 to this day.
          • watkinss 2 weeks ago
            Interesting work on adapting LoRA adapters. A similar idea applied to VLMs: https://arxiv.org/abs/2412.16777
            • kixiQu 2 weeks ago
              Can someone explain why this would be more effective than a system prompt? (Or just point me to it being tested against that, I suppose.)
              • gdiamos 2 weeks ago
                An alternative to prefix caching?
                • etaioinshrdlu 2 weeks ago
                  What is such a thing good for?
                  • npollock 2 weeks ago
                    LoRA adapters modify the model's internal weights
                    • make3 2 weeks ago
                      Not unless they're explicitly merged, which is not a requirement, just a small inference-speed optimization.
                      • jsight 2 weeks ago
                        Yeah, I honestly think some of the language used around LoRAs gets in the way of people understanding them. They become much easier to understand when you look at an actual implementation and at how they can be merged or kept separate.
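
                        For instance, a minimal sketch in PyTorch (illustrative only, not any particular library's implementation): the low-rank update can be applied on the side at forward time, or folded into the base weight once; the outputs are identical either way.

                            import torch
                            import torch.nn as nn

                            class LoRALinear(nn.Module):
                                # A frozen base Linear plus a low-rank update, kept separate by default.
                                def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
                                    super().__init__()
                                    self.base = base
                                    for p in self.base.parameters():
                                        p.requires_grad_(False)  # base weights stay frozen
                                    self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
                                    self.B = nn.Parameter(torch.zeros(base.out_features, rank))
                                    self.scale = alpha / rank
                                    self.merged = False

                                def forward(self, x):
                                    y = self.base(x)
                                    if not self.merged:
                                        # Side path: the base weights are untouched.
                                        y = y + (x @ self.A.T @ self.B.T) * self.scale
                                    return y

                                @torch.no_grad()
                                def merge(self):
                                    # Optional: fold B @ A into the base weight. This only removes
                                    # the extra matmul at inference; outputs are unchanged.
                                    self.base.weight += (self.B @ self.A) * self.scale
                                    self.merged = True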
                          • vessenes 2 weeks ago
                            Sounds like a good candidate for an MCP tool!