Extracting memorized pieces of books from open-weight language models

109 points by fzliu 2 weeks ago | 109 comments
  • ai_legal_sus 2 weeks ago
    I feel like role-playing as a lawyer. I'm curious: how would you defend against this in court?

    I don't think anyone denies that frontier models were trained on copyrighted material - it's well documented and public knowledge. (and a separate legal question regarding fair-use and acquisition)

    I also don't think anyone denies that a model that strongly fits the training data approximates the copy-paste function. (Or at the very least, if A then B, consistently)

    In practice, training resembles lossy compression of the data. Technically one could frame an LLM as a database of compressed training inputs.

    This paper argues and demonstrates that "extraction is evidence of memorization" which affirms the above.

    In terms of LLM output (the valuable product customers are paying for) this is familiar, albeit grey, legal territory.

    https://en.wikipedia.org/wiki/Substantial_similarity

    When a customer pays for an AI service, they're paying for access to a database of compressed training data - the additional layers of indirection sometimes produce novel output, and many times do not.

    Unless you advocate for discarding the whole regime of intellectual property or you can argue for a better model of IP laws, the question stands: why shouldn't LLM services trained on copyrighted material be held responsible when their product violates "substantial similarity" of said copyrighted works? Why should failure to do so be immune from legal action?

    • protocolture 2 weeks ago
      I wonder, however, if this paper might imply the answer.

      "But the results are complicated: the extent of memorization varies both by model and by book. With our specific experiments, we find that the largest LLMs don't memorize most books -- either in whole or in part. However, we also find that Llama 3.1 70B memorizes some books, like Harry Potter and 1984, almost entirely."

      I wonder if we could exclude the full text of these books from the training data and still approximate this result? Harry Potter and 1984 are probably some of the most quoted texts on the internet.

      >Unless you advocate for discarding the whole regime of intellectual property or you can argue for a better model of IP laws, the question stands: why shouldn't LLM services trained on copyrighted material be held responsible when their product violates "substantial similarity" of said copyrighted works? Why should failure to do so be immune from legal action?

      I think you are on the right track, but for me personally it really depends on how difficult it was to produce the result. Like if you enter "spit out harry potter and the philosophers stone" and it does. That's black and white. But if you have to torture the model with repeated prompts that force it to ignore its constraints, that's not exactly using the system as intended.

      I just tried ChatGPT:

      >I can’t provide the full text of Harry Potter, as it’s copyrighted material. However, I can summarize it, discuss specific scenes or characters, or help analyze the themes or writing style if that’s useful. Let me know what you're after.

      For my money, as long as the AI companies treat the reproduction of copyrighted material as a failure state, the nature of the training data is irrelevant.

      • friendzis 2 weeks ago
        > I think you are on the right track, but for me personally it really depends on how difficult it was to produce the result. Like if you enter "spit out harry potter and the philosophers stone" and it does. That's black and white. But if you have to torture the model with repeated prompts that force it to ignore its constraints, that's not exactly using the system as intended.

        Let me offer a different perspective. Having an LLM that is trained on copyrighted material and has memorized (or lossily compressed) it, plus some "safety" machinery that tries to avoid verbatim-ish outputs of copyrighted material, is fundamentally not really distinguishable from simply having a plaintext database of copyrighted material with machinery for "fuzzy" data extraction from said material.

        Suppose a company stores the whole of Stack Exchange in plaintext, then implements a chat-like interface that fuzzy-matches on the question, extracts answers from the plaintext database, fuzzes top-rated/accepted answers together, and outputs something - not necessarily quoting one distinct answer, but pretty damn close.
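
        To make the hypothetical concrete, here's a minimal sketch (the data, names, and 0.6 cutoff are all made up, and difflib stands in for whatever fuzzy matcher such a service would actually use):

          import difflib

          # Plaintext "database" of stored answers, e.g. scraped Q&A posts.
          # Keys are stored lowercased so matching is case-insensitive.
          ANSWERS = {
              "how do i reverse a list in python?": [
                  "Use list.reverse() for in-place reversal.",
                  "Slicing also works: ys = xs[::-1].",
              ],
          }

          def answer(query: str, cutoff: float = 0.6) -> str:
              # Fuzzy-match the incoming question against stored questions...
              hits = difflib.get_close_matches(query.lower(), list(ANSWERS),
                                               n=1, cutoff=cutoff)
              if not hits:
                  return "No close match."
              # ...then fuzz the top answers together into one response: not a
              # verbatim quote of any single answer, but pretty damn close.
              return " ".join(ANSWERS[hits[0]])

          print(answer("How do I reverse a Python list"))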

        How much "fuzziness" is required for this to stop being copyright violation? LLM advocates say that LLMs are "fuzzy enough" without clearly defining what "enough" means.

        • protocolture 2 weeks ago
          >Let me offer a different perspective. Having an LLM that is trained on copyrighted material and has memorized (or lossily compressed) it, plus some "safety" machinery that tries to avoid verbatim-ish outputs of copyrighted material, is fundamentally not really distinguishable from simply having a plaintext database of copyrighted material with machinery for "fuzzy" data extraction from said material.

          Right so sort of like a search engine that caches thumbnails of copyrighted images to display quick search results? Something I have been using for years and have no issues with, where the legal arguments are framed more about where the links go, and how easy the search engine makes it for me to acquire the original image?

          • NewsaHackO 2 weeks ago
            Would your argument be the same if it were a human? If a person memorizes a book verbatim, but uses safety/common sense not to transcribe the book for others because that would be copyright infringement, should he be disallowed from using the memorized information at all, just because he could duplicate it?
        • brudgers 2 weeks ago
          I'm curious: how would you defend against this in court?

          If by “you” you mean Google or OpenAI or Microsoft, etc., you use your much much deeper pockets to pay lawyers to act in your interests.

          All authors, publishers, etc. are outgunned. Firepower is what resolves civil cases in one party’s favor and a day in court is easily a decade or more away.

          • EarlKing 2 weeks ago
            Deep pockets are not a get-out-of-jail-free card. If a case escalates to the SCOTUS, many firms will submit amicus curiae briefs outlining their position on the matter and how it threatens their rights. Those parties arguably represent more money and influence than Google, OpenAI, Microsoft, etc. So if we accept the premise that all legal matters are decided on a basis of pure politics as mediated by money, then ultimately every court battle is a battle to assert that your actions don't actually affect the interests of interested parties, and that you'll fight them if they try to assert otherwise. On that count, it is reasonable to surmise that there are more interested parties with deeper pockets than any firm or firms fielding LLMs that might be caught up in a lawsuit over this.

            Ultimately, if an author can demonstrate protectable expression has been incorporated into an AI's training set and is emitted by said AI, no matter how small, they've got a case of copyright infringement. That being the case, LLM-based companies are going to suffer death by a thousand paper cuts.

            • brudgers 2 weeks ago
              If a case escalates to the SCOTUS

              For a civil case, that ain’t gonna be cheap or fast or likely.

            • AngryData 2 weeks ago
              Yeah, people don't want to admit it, but 90% of US law is based on who can spend the most money on lawyers and drain their opposition's coffers first, in both civil and criminal cases.
            • umeshunni 2 weeks ago
              I think the paper itself addresses that.

              Page 9: There is no deterministic path from model memorization to outputs of infringing works. While we’ve used probabilistic extraction as proof of memorization, to actually extract a given piece of 50 tokens of copied text often takes hundreds or thousands of prompts. Using the adversarial extraction method of Hayes et al. [54], we’ve proven that it can be done, and therefore that there is memorization in the model [16, 27]. But this is where, even though extraction is evidence of memorization, it may become important that they are not identical processes (Section 2). Memorization is a property of the model itself; extraction comes into play when someone uses the model [27]. This paper makes claims about the former, not the latter. Nevertheless, it’s worth mentioning that it’s unlikely anyone in the real world would actually use the model in practice with this extraction method to deliberately produce infringing outputs, because doing so would require huge numbers of generations to get non-trivial amounts of text in practice
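
              As a rough illustration of what that looks like mechanically, here is a naive sketch of probabilistic extraction: prompt with a known prefix, sample many continuations, and measure how often the true text comes back. (GPT-2 is a stand-in for the paper's Llama models, and the actual Hayes et al. method is considerably more sophisticated.)

                from transformers import AutoModelForCausalLM, AutoTokenizer

                tok = AutoTokenizer.from_pretrained("gpt2")
                model = AutoModelForCausalLM.from_pretrained("gpt2")

                prefix = "It was a bright cold day in April, and the clocks were"
                target = " striking thirteen"  # true continuation of 1984's opening line

                inputs = tok(prefix, return_tensors="pt")
                hits, n_samples = 0, 100
                for _ in range(n_samples):
                    out = model.generate(**inputs, do_sample=True, max_new_tokens=8,
                                         pad_token_id=tok.eos_token_id)
                    continuation = tok.decode(out[0][inputs["input_ids"].shape[1]:])
                    if continuation.startswith(target):
                        hits += 1

                # The extraction *rate*, not any single lucky sample, is the evidence.
                print(f"verbatim continuation in {hits}/{n_samples} samples")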

              • ai_legal_sus 2 weeks ago
                Yes, perhaps deliberate extraction is impractical, but I wonder about accidental cases. One group of researchers is a drop in the bucket compared to the total number of prompts happening every day. I would like to see a broad statistical sampling of responses matched against training data to demonstrate the true rate of occurrence. Which raises the question: what is the acceptable rate?
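
                One hedged sketch of what that sampling could look like: flag any response that shares a long verbatim span with a reference corpus. (The 200-character window is an arbitrary stand-in for the paper's 50-token unit, and both inputs here are placeholders.)

                  def ngrams(text: str, n: int) -> set:
                      return {text[i:i + n] for i in range(len(text) - n + 1)}

                  def shares_verbatim_span(output: str, corpus_grams: set, n: int = 200) -> bool:
                      # True if any n-character window of the output appears
                      # verbatim in the reference corpus.
                      return any(g in corpus_grams for g in ngrams(output, n))

                  # Placeholders: real use needs actual generations and training text.
                  corpus_grams = ngrams("...concatenated copyrighted training text...", 200)
                  outputs = ["...sampled model responses..."]
                  flagged = sum(shares_verbatim_span(o, corpus_grams) for o in outputs)
                  print(f"occurrence rate: {flagged / len(outputs):.4%}")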
              • ipython 2 weeks ago
                Exactly. I feel like the AI companies are intentionally moving the goalposts: regardless of whether the resulting generated content is the same as the original, they still committed the crime of downloading and using the original copyrighted content in the first place!

                After all they wouldn’t have used that content unless it provided some utility over not using it…

                • pbhjpbhj 2 weeks ago
                  This ground was already covered for search engines. In USA law the answer is transformative Fair Use.

                  We don't have transformative Fair Use, nor a Fair Dealing equivalent, in the UK - I don't see anything that allows this type of behaviour?

                • dboreham 2 weeks ago
                  Agreed -- there is a kind of compression being done. But what will happen is that the law will be changed to suit whoever has the most money, probably with the excuse that "but China will beat us otherwise".
                  • charcircuit 2 weeks ago
                      The model itself is transformative, and since the output alone of a model can't be copyrighted, I feel like it may not be possible to sue over the output of a model.
                    • SubiculumCode 2 weeks ago
                        Yes, if I read a book, memorize some passages, and use those memorized passages in a work without citation, it is plagiarism. I don't see how this is any different, short of relying on arbitrary, human-centric distinctions.
                      • ipython 2 weeks ago
                        More to the point, if you steal the book and never even read it, you are still guilty of a crime.
                      • perching_aix 2 weeks ago
                        > the additional layers of indirection sometimes produce novel output, and many times do not.

                        I think this is the key insight. It differs from something like, say, JPEG (de)compression in that it also produces novel but sensible combinations of copyrighted and non-copyrighted data, independent of their original context. In fact, I'd argue that is its main purpose. To describe it as just a lossy-compressed, natural-language-queryable database would therefore be reductive of its function and a mischaracterization. It can recall extended segments of its training data, as demonstrated by the paper, yes, but it also cannot plagiarize the entirety of a given source, as also described by the paper.

                        > why shouldn't LLM services trained on copyrighted material be held responsible when their product violates "substantial similarity" of said copyrighted works?

                        Because these companies and services are not, on their own, producing the output that is substantially similar. They (possibly) do it on user input. You could make a case that they should perform filtering and detection, but I'm not sure that's a good idea, since the user might well have the rights to create a work substantially similar to something copyrighted, such as when they themselves own the rights or have a license to that thing. At which point, you can only hold the user themselves responsible. I guess detection on its own might be reasonable to require, in order to give the user the ability not to incriminate themselves, should that indeed not be their goal. This is a lot like the famous-person detection and filtering that I'm sure tech reviewers have to battle from time to time.

                        This isn't to say they shouldn't be held responsible for pirating these copyrighted bits of content in the first place though. And if they perform automated generation of substantially similar content, that would still be problematic following this logic. Not thinking of chain-of-thought here mind you, but something more silly, like writing a harness to scrape sentiment and reactively generate things based on that. Or to use, idk, weather or current time and their own prompts as the trigger.

                        Let me give you a possibly terrible example. Should Blizzard be held accountable in Germany when users there, on servers located there, stand in the shape of a Nazi swastika in-game, and then publish screenshots and screen recordings of this on the internet? I don't think so. User action played a crucial role in the reproduction of the hate symbol in question. Conversely, LLMs aren't just spouting off whatever; they're prompted. The researchers in the paper had to put in focused effort to perform extraction. Despite the popular characterization, these are not copycat machines, and they're not just pulling all their answers out of a magic basket because we all ask obvious things answered before on the internet. Maybe if the aforementioned detections were added, people would finally stop coping about them this way.

                        • ai_legal_sus 2 weeks ago
                          One runs the risk of being reductive when examining a mechanism's irreducible parts.

                          User expression is a beast unto itself, but I wonder if that alone absolves the service provider? I imagine Blizzard has an extensive and mature moderation apparatus to police and discourage such behavior. There's an acceptable level of justice and accountability in place. Yet there are even more terrible real-life examples of illicit behavior outpacing moderation and overrunning platforms to the point of legal intervention and termination. Moderating user behavior is one thing, but how do you propose moderating AI expression?

                          A digression from copyright: portraying models as a "blank canvas" is itself a poor characterization. Output might be triggered by a prompt, like a query against a database, but it's ultimately a reflection of the contents of the training data. I think we could agree that a model trained on the worst possible data you can imagine is something we don't need in the world, no matter how well behaved your prompting is.

                          • perching_aix 2 weeks ago
                            I do not propose moderating "AI expression" - I explicitly propose otherwise, and further propose mandating that the user is provided with source attribution information, so that they can choose not to infringe, should they be at risk of doing so, and should they find that a concern (or even choose to acquire a license instead). Whether this is technologically feasible, I'm not sure, but it very much feels like to me that it should be.

                            > A digression from copyright - portraying models as a "blank canvas" is itself a poor characterization, output might be triggered by a prompt, like a query against a database, but its ultimately a reflection of the contents of the training data.

                            I'm not sure how to respond to this, if at all; I think I addressed how I characterize the functionality of these models in sufficient detail. This just reads to me like an "I disagree" - and that's fine, but then that's also kinda it. Then we disagree, and that's okay.

                      • Animats 2 weeks ago
                        Can they generate a list of books for which at least, say, 10% of the text can be recovered from the weights? Is this a generic problem, or is there just so much fan material around the Harry Potter books that it exaggerated their importance during training?
                      • Huxley1 2 weeks ago
                        I think this is somewhat like how we memorize when we read, but the model is not just doing rote memorization; it is more like compressing and recombining content. The copyright issue is definitely complicated, and I am curious how the law will adapt to these technologies in the future.
                        • suddenlybananas 2 weeks ago
                          On what basis do you say that it is like how we memorize when we read? I don't know about you, but it's extraordinarily difficult to memorise an entire book.
                          • kbelder 2 weeks ago
                            But when someone prompts you with a paragraph from a book you read, and asks you to guess the next sentence?

                            Still tough, but nowhere near as hard as memorizing a whole book. And far easier to come up with something at least plausible.

                            • Ygg2 2 weeks ago
                              It's even weirder when someone says banana and you quote the entirety of 1984.
                          • andy99 2 weeks ago
                            There are two legitimate points where copyright violation can occur with LLMs. (Not arguing the merits of copyright, just based on the concept as it is).

                            One is when copyrighted material is "pirated" for use in training, i.e. you torrent "the pile" instead of paying to acquire the books.

                            The other is when a user uses an LLM to generate a work that violates copyright.

                            Training itself isn't a violation, that's common sense. I am aware of lots of copyrighted things, and I could generate a work that violates copyright. My knowing this in and of itself isn't a violation.

                            The fact that an LLM agrees to help someone violate copyright is a failure mode, on par with telling them how to make meth or whatever other things their creators don't want them doing. There's a good argument for hardening them against requests to generate copyrighted content, and this already happens.

                            • TGower 2 weeks ago
                               The only way to include a book in a training dataset for LLMs without violating copyright law is to contact the rights holder and buy a license to do so. Buying an ebook license off Amazon isn't enough for this, and creating a digital copy from a physical copy for your commercial use is also against the law. A good rule of thumb: if it would be illegal for a company to distribute the digital file to employees for training, it's definitely illegal to train an AI the company will own on it.
                              • Ekaros 2 weeks ago
                                 It is widely accepted in many jurisdictions that different types of use fall under different copyright schemes. This is especially true for video and music content. You can't just take a DVD/Blu-ray copy of a movie and screen it in a movie theatre in many places. Or copy a CD and play it on the radio.

                                 I see no reason why training AI should be treated like human reading. Especially if it is repeated. And more so if copies are illegally acquired, like torrents.

                              • 201984 2 weeks ago
                                If the data of a copyrighted work is effectively embedded in the model weights, does that not make the LLM itself an illegal copy? The weights are just a binary file after all.
                                • dboreham 2 weeks ago
                                  The Dude Doctrine applies here: "That's just, like uh, your opinion, man".
                                • singleshot_ 2 weeks ago
                                  This is an interesting comment which avoids the issue: when a user uses an LLM to violate copyright, who is liable, and how would you justify your answer?
                                  • tux1968 2 weeks ago
                                    Not OP, but I would say the answer is the same as it would be if you substitute the LLM with a live human person who has memorized a section of a book and can recall it perfectly when asked.
                                    • diputsmonro 2 weeks ago
                                      It depends on where the money changes hands, IMO (which is basically what I think you're getting at). If you pay someone to perfectly recite a copyrighted work (as you pay ChatGPT to do), then it would definitely be a violation.

                                      The situation is similar with image generation. An artist can draw a picture of Mickey Mouse without any issue. But if you pay an artist to draw you the same picture, that would also be a violation.

                                      With generative tools, the users are not themselves writers or artists using tools - they are effectively commissioners, commissioning custom artwork from an LLM and paying the operator.

                                      If someone built a machine where you put a quarter in, cranked a handle, and it printed out pictures of the Disney character you chose, then Disney would be right to demand they stop (or, more likely, force a license deal). Whatever technology drives the machine, whether an AI model or an image database or a mechanical turk, is largely immaterial.

                                    • jplusequalt 2 weeks ago
                                      The company who trained the LLM. They're the ones who used the copyrighted material in their training set. Claiming they were unaware is not an excuse.
                                      • fc417fc802 2 weeks ago
                                        It's an interesting conundrum. If I take an extremely large panoramic photograph and then fail to censor out small copyrighted sections of it, am I violating copyright law?

                                        It's not a perfect analogy by any means but it does serve to illustrate the difference in intent between distributing a particular work versus creating something that happens to incorporate copyrighted material verbatim but doesn't have any inherent need to or purpose in doing so.

                                      • quesera 2 weeks ago
                                        Why would the answer here be any different than when using a photocopier, brain, or other tool for the same purpose?
                                    • iamleppert 2 weeks ago
                                      The tech companies have consolidated so much power, and are so invested in AI, that none of this really matters. If there is any defense, even an illogical or contrived one, that can reasonably be expected to play out, expect that defense to win out as the final outcome of a protracted legal battle. The law at its highest levels is less about interpreting black-and-white rules (as many people think it is) and more about the biases and motivations of those doing the interpreting.
                                      • jrm4 2 weeks ago
                                        And hopefully this puts to rest all the painfully bad, often anthropomorphizing, takes about how what the LLMs do isn't copyright infringement.

                                        It's simple. If you put the works into the LLM, it can later make immediately identifiable, if imperfect, copies of the work. If you didn't put the work in, it wouldn't be able to do that.

                                        The fact that you can't "see the copy" inside is wildly irrelevant.

                                        • flir 2 weeks ago
                                           Size of LLM: <64 GB.

                                           Size of training data: fuck knows, but The Pile alone is 880 GB. Public GitHub's gonna be measured in TB. A Common Crawl snapshot is about 250 TB.

                                          There's physically not enough space in there to store everything it was trained on. The vast majority of the text the chatbot was exposed to cannot be pulled out of it, as this paper makes clear.
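
                                           Back-of-envelope with those rough figures (all approximate):

                                             model_bytes = 64e9           # weights, ~64 GB
                                             pile_bytes = 880e9           # The Pile, ~880 GB of text
                                             common_crawl_bytes = 250e12  # one Common Crawl snapshot, ~250 TB

                                             # Against The Pile alone, the weights could hold at most ~7% of
                                             # the text, and they have to do everything else besides storage.
                                             print(model_bytes / pile_bytes)          # ~0.073
                                             print(model_bytes / common_crawl_bytes)  # ~0.000256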

                                          I'm guessing that the cases where great lumps of copyright text can be extracted verbatim are down to repetition in the training data? There's probably a simple fix for that.

                                          (I'm only talking about training here. The initial acquisition of the data clearly involved massive copyright infringement).

                                          • pbhjpbhj 2 weeks ago
                                             >The initial acquisition of the data clearly involved massive copyright infringement.

                                             I don't find this to be true in the USA, because Google already covered this ground and the doctrine of transformative Fair Use was born.

                                            • flir 2 weeks ago
                                              It's the download. I don't think you can download The Pile without infringing.

                                              17 U.S. Code § 106 covers reproduction, not just redistribution (IANAL).

                                               As I said, I'm separating the acquisition of the data from the training on the data, because I believe the first is an infringing act, while the other is (in the general case) not.

                                              • dragonwriter 2 weeks ago
                                                > Because Google already covered this ground and the doctrine of transformative Fair Use was born.

                                                Fair Use, and the way whether a work is transformative is a factor in it, is much older than Google; I'm not sure what specific Google precedent you think is relevant here.

                                              • jrm4 2 weeks ago
                                                You'd have a very hard time legally distinguishing this from "compressing a copyrighted work" though.
                                                • vintermann 2 weeks ago
                                                  To get out the original data from a compressed file, you just need to know the algorithm used (and for almost all formats, the file tells you).

                                                  To get out the original data from an LLM, you need to supply... the original data. Or at least, a big chunk of it.

                                                  The actually copyrightable chunk of it, arguably, since what an LLM can generate on its own is only its most predictable, unoriginal, generic chunks. Things it's seen a thousand times.

                                                  Which may turn out to be an uncomfortably high % of most creative works.

                                                  • flir 2 weeks ago
                                                    I wouldn't, because the vast majority of the copyrighted works it was trained on are not present in the model, and the model can't be persuaded to spit them out at any reasonable level of fidelity (as the paper points out).

                                                    The comparatively few that are should be fixed.

                                                    If you want to argue that the act of training is in itself infringing, even if it doesn't result in a copy... well, I'd enjoy seeing you make that argument.

                                                • perching_aix 2 weeks ago
                                                   You remind me of all the shitty times in literature class where I had to rote-memorize countless works by a given author (a poet), think 40, then take a test identifying which poem each of the given quotes was from. The little WinForms app I wrote to practice for these tests was one of the first programs I've ever written. I guess in that sense it's also a fond memory. I miss WinForms.

                                                   Good thing they were public (?) works; wouldn't wanna get sued [0] for possibly being a two-legged copyright infringement. Or should I say for having been one, since naturally I erased all of these works from my mind just days after the tests, even without any legal impetus.

                                                   Edit: thinking about it a bit more, you also remind me of our midterm tests from the same class. We had to produce multiple-page essays on the spot, analyzing a selected work... from memory. Bonus points for being able to quote from it, of course. Needless to say, not many original thoughts were featured in those essays, not in mine, not in others' - the explicit expectation was that you'd peruse the current and older textbooks to "learn (memorize) the analysis" from, and then "write about it in your own words", but still using technical terms. They were pretty much just tests of jargon use, composition, and memorization, which is definitely a choice of all time for a class on literature. But I think it offers an interesting perspective. At no point did we ever have to actually demonstrate a capability for literary analysis of our own, nor was that capability ever graded, for example. But if you only read our essays, you'd think we were great at it. It was mimicry. That doesn't mean we didn't end up developing such a capability, though.

                                                  [0] https://youtu.be/-JlxuQ7tPgQ

                                                  • singleshot_ 2 weeks ago
                                                    I don't think it matters much if the infringement is public, right? Given that

                                                    "Subject to sections 107 through 122, the owner of copyright under this title has the exclusive rights to do and to authorize any of the following:

                                                    (1) to reproduce the copyrighted work in copies"

                                                    • perching_aix 2 weeks ago
                                                      Public works are not protected by copyright, which is why they are public. I think you're misreading what I said.

                                                      Edit: I guess the proper term is public domain works, not just public works. Maybe that's our issue here.

                                                    • thaumasiotes 2 weeks ago
                                                      > I miss WinForms.

                                                      WinForms is still around. There have been further technologies, but as far as I can tell the current state of things is basically just a big tire fire and about the best you can do is to ignore all of them and develop in WinForms.

                                                      Is there a successor now?

                                                      • perching_aix 2 weeks ago
                                                        I miss WinForms in the sense that I don't use it anymore (and have no reason to), not in the sense that it's been deprecated. It did fall out of fashion somewhat though, as far as I'm aware it's been replaced by WPF in most places.
                                                    • karaterobot 2 weeks ago
                                                      Where in the article do the authors say this puts anything to rest? Here is their conclusion:

                                                      > Our results complicate current disputes over copyright infringement, both by rejecting easy claims made by both sides about how models work and by demonstrating that there is no single answer to the question of how much a model memorizes

                                                       I wonder if this is the sort of article that people will claim supports their side, or even that it ends the debate with a knockout blow to the other side, when the actual article makes no such claim.

                                                      I'm sure you read the entire article before commenting, but I would strongly recommend everyone else does as well.

                                                      • rockemsockem 2 weeks ago
                                                         I think a big part of copyright law is whether the thing created from copyrighted material competes with the original work, in addition to whether it's transformative.

                                                        LLMs are OBVIOUSLY not a replacement for the books and works that they're trained on, just like Google books isn't.

                                                        • crmd 2 weeks ago
                                                          Authors are getting busted[1] on a regular basis publishing LLM-generated or augmented novels that plausibly compete commercially with human-written books. If the LLM was trained on any books in the same genre, it seems like a clear violation.

                                                          1. For example earlier this month: https://www.reddit.com/r/fantasyromance/comments/1ktrwxj/fan...

                                                          • CaptainFever 2 weeks ago
                                                            I think what the GP meant was that it doesn't compete with that specific work it was trained on.

                                                            That is, if you want to read Harry Potter, you'd rather buy it (or get it from Anne) than try to wrangle it out of an LLM. Therefore, it doesn't compete with the original work. IANAL, though.

                                                            • rockemsockem 2 weeks ago
                                                              But that's not the LLM itself, it's an output of the LLM. The distinction matters I believe
                                                            • tossandthrow 2 weeks ago
                                                               Why not? Imagine a storyteller app that is instructed to narrate a story that follows Harry Potter 1 - I would expect that there are already a ton of these apps out there.
                                                              • rockemsockem 2 weeks ago
                                                                That's not the same as the LLM itself though. That's an LLM plus specific instructions which would likely need to include a fair number of details from the books
                                                              • o11c 2 weeks ago
                                                                IANAL, but I don't think whether LLMs are successful as a replacement is very relevant.

                                                                 LLMs are advertised as, and attempt to be, replacements for the works that they're trained on. For HN users, this is most often code generation, but people in other fields use them for similar replacement in their own fields.

                                                                • rockemsockem 2 weeks ago
                                                                  No one advertises LLMs as a replacement for literature or news and those are some of the highest profile legal cases I'm aware of. Code generation is another relatively high profile case, but in those cases I've not really seen any copyrightable code reproduced by an LLM without the prompter specifically trying to make it happen.
                                                                • vintermann 2 weeks ago
                                                                  It should be in a sane world, but the courts are not a sane world, or even a consistent world.
                                                                • orionsbelt 2 weeks ago
                                                                  So can humans? I can ask a human to draw Mickey Mouse or Superman, and they can! Or recite a poem. Some humans have much better memories and can do this with a far greater degree of fidelity too, just like an LLM vs an average human.

                                                                  If you ask OpenAI to generate an image of your dog as Superman, it will often start to do so, and then it will realize it is copyrighted, and stop. This seems sensible to me.

                                                                  Isn’t it the ultimate creative result that is copyright infringement, and not merely that a model was trained to understand something very well?

                                                                  • jrm4 2 weeks ago
                                                                     Copyright infringement is the act of creating/using a copy in an unauthorized way.

                                                                    Remember, we can only target humans. So we're not likely to target your guy; but we ARE likely to target "the guy that definitely fed a complete unauthorized copy of the thing into the LLM."

                                                                    • regularfry 2 weeks ago
                                                                      I just don't get the legal theory here.

                                                                       If I download harry_potter_goblet_fire.txt off some dodgy site, then let's assume the owner of that site has infringed copyright by distributing it. If I upload it again to some other dodgy site, I would also infringe copyright in a similar way. But that would be naughty, so I'm not going to do that.

                                                                      Let's say instead that I feed it into a bunch of janky pytorch scripts with a bunch of other text files, and out pops a bunch of weights. Twice.

                                                                      The first model I build is a classifier. Its output is binary: is this text about wizards, yes/no.

                                                                      The second model I build is an LLM. Its output is text, and (as in the article) you can get imperfect reproductions of parts of the training file out of it with the right prompts.

                                                                      Now, I upload both those sets of weights to HuggingFace.

                                                                      How many times am I supposed to have infringed copyright?

                                                                      Is it:

                                                                      A) Twice (at least), because the act of doing anything whatsoever with harry_potter_goblet_fire.txt without permission is verboten;

                                                                      B) Once, because only one of the models is capable of reproducing the original (even if only approximately);

                                                                      C) Zero, because neither model is capable of a reproduction that would compete with the original;

                                                                      or

                                                                      D) Zero, because I'm not the distributor of the file, and merely processing it - "format shifted" from the book, if you like - is not problematic in itself.

                                                                      Logically I can see justifications for any of B) (tenuously), C), or D). Obviously publishers would want us to think that A) is right, but based on what? I see a lot of moral outrage, but very little actual argument. That makes me think there's nothing there.

                                                                    • bandrami 2 weeks ago
                                                                      Public performance of a written work has different rules than reproducing text or images.

                                                                      But me drawing Superman would absolutely be violating DC's copyright. They probably wouldn't care since my drawing would suck, but that's not the legal issue.

                                                                      • MengerSponge 2 weeks ago
                                                                        [flagged]
                                                                        • nick__m 2 weeks ago
                                                                          Powerful tools that would not exist otherwise !
                                                                      • hnthrowaway_989 2 weeks ago
                                                                         The copyright angle is a lose-lose situation. If the copyrightists win, the outcome is an even more restricted definition of "fair use", which will probably kill a lot of art.

                                                                        I don't have an alternative argument against AI art either, but I don't think you are going to like this outcome.

                                                                        • cortesoft 2 weeks ago
                                                                          Just because an LLM has the ability to infringe copyright doesn’t mean everything it does infringes copyright.
                                                                          • fluidcruft 2 weeks ago
                                                                            If it contains the copyrighted material, copyright laws apply. Being able to produce the content demonstrates pretty conclusively that it contains the copyrighted material.

                                                                            The only real question is whether it's possible to prevent the system from generating the copyrighted content.

                                                                             A strange analogy would be some sort of magical Blu-ray that plays novel movies unless you enter a decryption key, in which case it plays a copyrighted one. And somehow you would have to prevent people from entering those keys.

                                                                            • echelon 2 weeks ago
                                                                              > If it contains the copyrighted material, copyright laws apply.

                                                                              Not so fast! That hasn't been tested in court or given any sort of recommendation by any of the relevant IP bodies.

                                                                              And to play devil's advocate here: your brain also contains an enormous amount of copyrighted content. I'm glad the lawyers aren't lobotomizing us and demanding licensing fees on our memories.

                                                                              I'm pretty sure if you asked me to sit under an MRI and recall scenes from movies like "Jurassic Park", my visual cortex would reconstruct scenes with some amount of fidelity to the original. I shouldn't owe that to Universal. (In a perfect world, they would owe me for imprinting and storing their memetic information in my mind. Ad companies and brands should for sure.)

                                                                              If I say, "One small step for man", I'm pretty confident that the lot of you would remember Armstrong's exact voice, the exact noise profile of the recording, with the precise equipment beeps. With almost exacting recall.

                                                                              I'm also pretty sure your brains can remember a ginormous amount of music. I know I can. And when I'm given a prediction task (eg. listening to a song I already know), I absolutely know what's coming before it hits.

                                                                            • o11c 2 weeks ago
                                                                              Napster was found to have secondary liability for what their users did, and they didn't even feed the copyrighted inputs directly.

                                                                              AI companies are adding the copyrighted material themselves, so they should have even more liability for what their users do (even ignoring the advertising they do).

                                                                            • jonplackett 2 weeks ago
                                                                              I don’t think it’s that simple.

                                                                              Last I remember, whether it is ‘transformative’ is what’s important.

                                                                              https://en.m.wikipedia.org/wiki/Transformative_use

                                                                              Eg. Google got away with ‘transforming’ books with Google books.

                                                                              https://www.sgrlaw.com/google-books-fair-transformative-use/

                                                                              • morkalork 2 weeks ago
                                                                                If that argument worked, anyone could use compression algorithms as a loophole for copyright infringement.
                                                                                • dylan604 2 weeks ago
                                                                                  The problem is if someone uses a prompt that is clearly Potter-esque, there have been examples of it returning Potter exactly. If it had never had Potter put into it, it would not be able to do that.

                                                                                  I think the exact examples used in the past were Indiana Jones, but the point is the same.

                                                                                  • anothernewdude 2 weeks ago
                                                                                    One day they'll use these cases to sue people who have read books, because they can make immediately identifiable if imperfect copies of the works.
                                                                                    • tossandthrow 2 weeks ago
                                                                                       What? Can we copy your brain in the billions and load it onto a phone like a commodity?
                                                                                      • CaptainFever 2 weeks ago
                                                                                        What does that matter? (Also: you never know, that might be possible some day.) You can still infringe copyright without a computer or printing press; just write out a book that you remember and distribute it.
                                                                                    • bdbenton5255 2 weeks ago
                                                                                      Suchir Balaji did not die in vain.
                                                                                    • landl0rd 2 weeks ago
                                                                                       Important note: they likely "memorize" Harry Potter and 1984 almost completely because, strictly speaking, they don't. It's no coincidence that some of the most popular, most-quoted books are the ones "memorized". What they're actually memorizing is likely fair-use quotes from the books, which are among the best-represented strings in the training set.
                                                                                      • dboreham 2 weeks ago
                                                                                        If I take a very large set of fair use quotes from a book, that I find on the internet, and stitch them together to make a "1984 of Theseus", and make that downloadable for a fee, am I not infringing copyright?
                                                                                        • landl0rd 2 weeks ago
                                                                                          The user has to do “final assembly”, giving it an appropriate input to produce the final stitched-together result.
                                                                                      • billionairebro 2 weeks ago
                                                                                        [flagged]
                                                                                        • w10-1 2 weeks ago
                                                                                          This approach is misguided, as are most applications of copyright to AI.

                                                                                          Copyright violations are a form of stealing, like conversion or misappropriation, where limited rights granted are later expanded.

                                                                                          The "substantial similarity" test is just a way courts have evolved to see if there was copying, and if it was important -- in the context of human beings. But because it doesn't really matter if people make personal copies, and because you have to quote something to criticize it, and because some art is like other art -- because that level of stealing is normal -- copyright built a bunch of exceptions.

                                                                                          But imho there is no doubt that though a book grants the right to read for the sake of enjoyment, the right to process the text for recall or replication by automated means is not included in any sale of any copy -- regardless of whether one can trigger output that meets a substantial-similarity test.

                                                                                          "All Rights Reserved"

                                                                                          I understand case law and statutes state nothing like this, and that prior law does more to obscure than clarify the issue. But that's the take from first principles.