Did Schnorr destroy RSA? Show me the factors
287 points by sweis 4 years ago | 207 comments
- tyingq 4 years agoIsn't it typical to release the paper first, for peer vetting, ahead of any actual working proof?
It seems like the only reason for the "put up or shut up" reactions is that "destroys RSA" comment in the submitted abstract...which isn't in the actual paper.
- dragontamer 4 years agoThe sum-of-three cubes announcement was tweeted pretty easily.
https://twitter.com/robinhouston/status/1169877007045296128
It's easier to drum up support for your paper when you have a quick way to prove to the community of mathematicians that your results are golden.
EDIT: The original webpage: http://math.mit.edu/~drew/sumsofcubes.html
As you can see, the sum-of-cubes announcements are very terse. Ultimately pointing to the following link: https://share.cocalc.com/share/900eec7e-0710-4e2f-a03a-dba01...
That kind of website / tweet is a "drop the mic" moment. It really makes people pay attention.
- robinhouston 4 years agoSo this is why I’ve suddenly been getting new interaction on that old tweet! I’m glad I saw this comment, because I was quite confused.
- coliveira 4 years agoThat's not how science works. Yes, if your algorithm is simple enough that you can create an implementation, then it is good to produce a working version. But implementing the algorithm may be more complicated than writing the paper. That doesn't mean the implementation is impossible.
- pvg 4 years agoThat's exactly how maths works, specifically in the cases where the claim is easily backed by a demonstration. A famous example:
https://en.wikipedia.org/wiki/Frank_Nelson_Cole
If you 'destroyed RSA' through better factorization, all you have to do is start publishing factors of RSA challenge numbers.
Matthew Green has a fun thread about other ways to approach this along with an interesting "real talk about factoring" sidebar by Nadia Heninger:
https://twitter.com/matthew_d_green/status/13669500931784990...
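Verifying such a claim is trivial for anyone: multiply the published factors and compare against the challenge modulus. A minimal sketch of that check, using a toy semiprime (3233 = 61 × 53) as a stand-in for a real RSA challenge number:

```python
def verify_factoring_claim(n: int, p: int, q: int) -> bool:
    """Check that p and q are a nontrivial factorization of n."""
    return 1 < p < n and 1 < q < n and p * q == n

# Toy stand-in for an RSA challenge modulus; a real claim would use
# e.g. the still-unfactored 862-bit challenge number.
assert verify_factoring_claim(3233, 61, 53)
assert not verify_factoring_claim(3233, 7, 11)
print("claimed factors check out")
```

This is why "publish the factors" is such a clean test: the claim is falsifiable in microseconds, regardless of whether anyone has checked the paper's proofs.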
- pdonis 4 years ago> That's not how science works.
This isn't science, it's math. As the article mentions, there is an 862-bit RSA challenge that hasn't been factored yet. Factoring it should be possible on commodity hardware if the claims in the paper are true. So why not just do it? The test of success is simple: either you win the challenge or you don't.
- raverbashing 4 years ago> But it may be more complicated to implement the algorithm than writing a paper
Then why would I trust it? You don't need to write code, you need to write an example
As Linus Torvalds says: talk is cheap, show me the code
Academia is full of "paper scientists" who put out papers but produce nothing of value.
It is also full of postgraduate students who would be more than willing to work together and ship proof-of-concept code along with the paper.
- pvg 4 years ago
- robinhouston 4 years ago
- dan-robertson 4 years agoIf you really think your paper destroys RSA, I think there are ethical questions the authors must decide before publishing it.
In particular, I think the right process would be:
1. Give some brief description of the result (eg factoring numbers in O(...)), and some proof (eg a factorisation of the next rsa semiprime, possibly more) that convinces people that your claims are true
2. Wait a while to give people a chance to avoid being burned
3. Publish the paper
Instead, the authors seem to be going for:
1. Publish the paper with a provocative abstract.
2. Wait to see who implements the algorithm first.
It doesn’t seem the best idea to me, but what do I know?
- goalieca 4 years agoUnlike a zero day, there still remain a number of important factors (haha) in actually breaking a large number. But crypto systems are special because they rely on trust. The mere sign of weakness is sufficient to kill that trust. A sufficiently resourced state actor may even be ahead... we don't know.
- vbezhenar 4 years agoYou're going to be killed or kidnapped between steps 1 and 3.
- dcow 4 years agoThe pursuit of knowledge should not be subject to any annoying effects said knowledge may have on people hedged against such knowledge becoming available. That’s an anti-liberal recipe for a rather dark society. I don't see the point in tone policing people generating knowledge just to remind them that sometimes knowledge is inconvenient.
- jrochkind1 4 years ago"tone policing"? What do you think that phrase means? Who was talking about "tone"?
Do you believe in "responsible disclosure" [1] of security vulnerabilities? How does your stated philosophy apply or not apply to ethics around disclosure of discovered software security vulnerabilities? Is that different?
[1] https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability...
- sigzero 4 years agoSorry that is such a bullshit statement.
- ChrisLomont 4 years ago>I don't see the point in tone policing people generating knowledge just to remind them that sometimes knowledge is inconvenient.
If you found a simple way to kill all of mankind, that could be mitigated by waiting a week to publish while safeguards were implemented, is it wiser to publish immediately and risk someone killing all of mankind or to notify proper groups and then publish later after it won't kill everyone?
Maybe there's some nuance in these things. Ignoring effects of knowledge is not wise.
- jrochkind1 4 years ago
- goalieca 4 years ago
- andrewla 4 years agoFor something of this magnitude, I think the expected behavior would be to delay the release of the paper until the ecosystem had time to adapt. To prove the paper is valid (and to assert precedence) you would offer to factor several "nothing up my sleeve" numbers -- like sha512("schnorr1") + sha512("schnorr2").
As it is, if the algorithm presented is valid then this potentially compromises currently operating systems.
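The "nothing up my sleeve" challenge numbers described above could be derived along these lines (a sketch; the tag strings and the exact construction, such as concatenating two digests, are illustrative):

```python
import hashlib

def nums_number(tag: str) -> int:
    """Derive a deterministic challenge number from a public tag string,
    so the challenger cannot be suspected of choosing easy inputs."""
    return int.from_bytes(hashlib.sha512(tag.encode()).digest(), "big")

# Anyone can regenerate these numbers from the tags alone; factoring them
# would demonstrate the algorithm on inputs nobody could have cherry-picked.
challenge = nums_number("schnorr1")
assert challenge == nums_number("schnorr1")  # deterministic
assert challenge.bit_length() <= 512
```

Because the digest output is public and reproducible, precedence is established without revealing anything about the algorithm itself.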
- dcow 4 years agoI do not agree that so-called “responsible disclosure” would or should be the expected behavior. I do understand how someone accustomed to corporate bug bounties and private security mailing lists may think so, though. Full disclosure is a perfectly reasonable strategy especially when we’re operating in the academic realm. Industry always takes years to catch up to academia anyway.
- nine_k 4 years agoIf this paper makes it possible to produce a working RSA cracker in a month, much of the high-value IT infrastructure is under imminent threat.
Yes, you can replace your SSH keys with elliptic ones, and maybe adjust your TLS accepted algorithms. Even this is not always easy or cheap.
But other things that may rely on RSA (or triple RSA) may have trouble upgrading fast, and upgrading them at all is going to cost a lot.
- nine_k 4 years ago
- ajarmst 4 years agoNote that (despite the ill-considered claim about RSA in the abstract) this isn't security research. It's a maths paper in a very important area of number theory. It doesn't disclose a patchable flaw in RSA; it claims that RSA is based on an unproven conjecture that turns out to be false. That RSA would be vulnerable to fast prime factoring has been known since it was first implemented. That's not fixable, so delay serves little purpose, especially for a (if it turns out to be true) seminal result in number theory.
- coliveira 4 years agoIf you find something that can break all cryptography in the world, then I think your best option (even for your personal security) is to release everything publicly.
- pacman2 4 years agoA one-time pad cannot be broken.
- 4 years ago
- pacman2 4 years ago
- toast0 4 years agoI haven't read the paper or anything, but if the expected adaptation is to drop support for RSA, no reasonable amount of time will make it a seamless transition.
There are so many devices and pieces of software stuck on RSA that a heads-up of, say, 5 years would still result in a colossal mess; may as well have the mess now.
- 4 years ago
- GordonS 4 years agoRSA has published the same set of "come and break me" keys for years, decades I think. Breaking those public (in both senses!) keys would be a good start.
- dcow 4 years ago
- Klwohu 4 years agoYou probably understand how serious this is; many people are going to become very emotional, as this strikes at the very foundations of the Internet and digital security as we know it. The reactions I've seen so far do seem very emotional, and this will only become much, much worse if a PoC is produced.
- not2b 4 years agoWhat does emotion have to do with it? For many years, every time someone claims to have broken RSA the response is the same: here is a test case you (the alleged RSA breaker) should be able to crack if your claims are correct. Put up or shut up. That seems like a fair, just-the-facts response.
- ajarmst 4 years agoInternet and digital security? This strikes at the very foundations of number theory.
- Jweb_Guru 4 years agoEhhhhhhh. I'm not sure I'm aware of anything in number theory that would substantially change if a fast factoring algorithm were discovered, outside of its applications to cryptography. It'd be very likely that a fast factoring algorithm exposed some unusual structure about the natural numbers, I guess?
- Jweb_Guru 4 years ago
- not2b 4 years ago
- chrisseaton 4 years ago> is that "destroys RSA" comment in the submitted abstract...which isn't in the actual paper
I think it is - https://eprint.iacr.org/2021/232.pdf
- tyingq 4 years agoAh, I see. It's been removed in a newer revision of the paper. https://www.math.uni-frankfurt.de/~dmst/teaching/WS2019/SVP9...
- dTP90pN 4 years agoThe newer 12-page version on the preprint server has a PDF creation date of 3/3/2021, 11:32:56 AM, created with pdfeTeX-1.30.4.
https://eprint.iacr.org/eprint-bin/getfile.pl?entry=2021/232...
The previous (reportedly wrongly uploaded) version is from 12/5/2019, 9:10:13 AM created with pdfeTeX-1.30.4.
https://eprint.iacr.org/eprint-bin/getfile.pl?entry=2021/232...
The university website version is from 3/5/2020, 12:00:19 PM created with pdfTeX-1.40.15.
These dates & times are MM/DD/YYYY & CET.
A co-editor of the Cryptology ePrint Archive confirmed the submission on twitter:
- bhaney 4 years agoOther way around I think. Your link is an old version of the paper. The one on eprint was just updated today with a version of the paper that adds the "destroys RSA" line and removes the "work in progress" line (put it in the wayback machine to see the version that was there yesterday without the claim of destroying RSA)
- m4lvin 4 years agoThat times out for me, I guess www.math.uni-frankfurt.de is now getting more attention than usual ;-)
Here is a version in the Google cache, it has an old date on it "work in progress 04.03.2020" and does not contain the "destroys RSA" sentence: https://webcache.googleusercontent.com/search?q=cache:E0L-S3...
- 4 years ago
- dTP90pN 4 years ago
- tyingq 4 years ago
- jasonmp85 4 years agoThis is math. The working proof _is_ the paper. It just takes a long time to refute if there’s a subtle error. “Putting up” ISN’T proof but it will sure get a lot of important people to drop what they’re doing and check the paper faster.
- roywiggins 4 years agoI dunno, isn't a demonstration of a few factors proof that someone somewhere has broken RSA, or has a machine much more powerful than we'd expect to exist? It's not proof that any of the math in the paper is correct, but it doesn't have to be. Even if there's a subtle error in the proof-as-written it doesn't matter if the implementation actually works.
- nl 4 years agoAll that is true.
But if the author of the paper (or even someone else with a credible reputation) were to publish factors and say they used this method, then it's useful evidence.
- nl 4 years ago
- ajarmst 4 years agoI'd agree with that if he had merely tweeted about RSA. However, he put the claim to "destroy RSA" in the abstract. Claims made in the abstract do need to be substantiated, but RSA isn't discussed in the main text at all.
- sellyme 4 years ago"Putting up" isn't proof that an RSA decryption algorithm exists, but it is proof that someone has found a way to reverse RSA either way.
The latter is much more important.
- rocqua 4 years agoThis is math claiming an algorithm that is faster.
For an algorithm, asking for results is not that weird. It certainly fits within the purview of a paper. Moreover, it would be really strong evidence for the claims made in the paper. It is much easier to check than the proofs.
- KMag 4 years agoI haven't looked into the details of the speed improvements... it's often the case that big-O improvements in complex problems come with bigger constant factors. It's entirely reasonable that this new algorithm is slower for factoring 384-bit numbers but faster for 4096-bit numbers.
I wouldn't be surprised if a demonstration that pushes the current publicly known record for largest factored RSA modulus costs hundreds of thousands or millions of dollars even with this new algorithm, and the algorithm is also slower than other methods for, say 384-bit RSA.
It may be difficult even for Claus Peter Schnorr to get that kind of budget for a demonstration before his paper gets traction.
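KMag's point about constant factors can be made concrete: a polynomial-cost algorithm with a huge constant can lose to GNFS at small key sizes and win at large ones. A rough sketch under made-up assumptions (the 1e12 constant and the cubic cost model are purely illustrative, not from Schnorr's paper; the GNFS formula is the standard heuristic):

```python
import math

def gnfs_ops(bits: float) -> float:
    """Heuristic GNFS cost: exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3))."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def hypothetical_poly_ops(bits: float, constant: float = 1e12) -> float:
    """A hypothetical polynomial-time cost model with a large constant factor."""
    return constant * bits ** 3

# The crossover: GNFS is cheaper at 384 bits, the hypothetical
# polynomial algorithm is cheaper at 4096 bits.
assert gnfs_ops(384) < hypothetical_poly_ops(384)
assert hypothetical_poly_ops(4096) < gnfs_ops(4096)
```

So "new algorithm, yet no new factoring record" is not by itself damning; the crossover point could sit above the sizes anyone has budget to attack.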
- roywiggins 4 years ago
- croddin 4 years agoHow broken does this claim RSA to be? SHA-1, for example, was known to be broken for a long time before a collision was actually pulled off.[1]
- lowercase1 4 years agoAn attack is anything that makes it take less than a brute force effort (of 2^80 operations). A 2^63 effort is really expensive in 2007. But by 2015 computation was maybe 2^5 times cheaper and there was a 2^5 cheaper attack.
I believe that this breakthrough could be quite a bit bigger because it's changing the costs from exponential to polynomial and so speedup is likely a much bigger change.
- lowercase1 4 years ago
- anonisko 4 years agoThe whole paper is linked, https://eprint.iacr.org/2021/232.pdf
- DennisP 4 years agoAnd the article says it would take around a thousand CPU core years to break 800-bit RSA. Maybe Schnorr needs to publish first to get the budget for that.
- pdonis 4 years ago> the article says it would take around a thousand CPU core years to break 800-bit RSA
That's for previous algorithms, not the one described in the paper.
- ajarmst 4 years agoHe doesn't need to demonstrate it with an 800-bit number. Given the claims made about how much faster this is, the effect should show up even in fairly small numbers.
- ajarmst 4 years ago
- martamorena2843 4 years agoThat can be solved within hours on a GPU cluster... Looking at a single core is usually not a great idea when you have billions of cores available.
- salawat 4 years agoOnly if your algorithm is conducive to parallelism based speedup. Amdahl's law still applies.
The paper mentions some gains being possible through parallelism with one of the algorithms their work is based on, but also mentioned most prior art is not effectively parallelizable across discrete machines.
- salawat 4 years ago
- pdonis 4 years ago
- itcrowd 4 years agoThe factors could be included in the manuscript as an example..
- baby 4 years agoIt is in the actual paper.
Also, without that sentence nobody would be trying to read the paper, so props to Schnorr, who understands how to create buzz :P
- ajarmst 4 years agoYep. A paper has to substantiate any claims made in the abstract, and this failure already undermines my confidence that the result is correct.
- dragontamer 4 years ago
- paob 4 years agoHere we have Léo Ducas testing Schnorr's new method in Sage: https://github.com/lducas/SchnorrGate
Apparently, "[t]his suggest that the approach may be sensible, but that not all short vectors give rise to factoring relations, and that obtaining a sufficient success rate requires much larger lattice dimension than claimed in [Sch21]."
- ianbooker 4 years agoCP Schnorr is an emeritus professor at Frankfurt University. He is respected for his work in cryptography. He has, pun intended, nothing to prove, but still works and furthers research.
Yes, claiming that "this breaks RSA" is bold, but this implementation shows that there is some advance toward it in the paper.
Therefore, signaling a "scandal" via the suffix "-gate" seems inappropriate.
Apart from that kudos for the implementation to Ducas!
Calling it the "Schnorr attack" would imply that the outcome of it is still uncertain. And it also would sound way cooler ;)
- paob 4 years agoI recommend you contact Ducas to tell him about your concerns directly. I do not know him personally as I first heard about this from his public Twitter account: https://twitter.com/DucasLeo
Just to make sure you get Ducas's main argument, I quote him here again: "Personal study (unfortunately, never written down cleanly) of this approach suggested me that this approach requires solving SVP in dimensions beyond reasonable, leading to a factorization algorithm much slower than the state of the art. My impression is that this is the consensus among experts having spent some time on it as well."
So it seems like the conclusion is clear-cut contrary to what you were suggesting.
Also wouldn't the name "Schnorr attack" lead to people thinking of attacks on Schnorr signatures instead?
- ianbooker 4 years agoGood point on the signatures.
- ianbooker 4 years ago
- paob 4 years ago
- ianbooker 4 years ago
- NoKnowledge 4 years agoThis take is rather naive. Those RSA factoring records were done by a large international team of researchers, using well established algorithms and decades of work on implementing those methods as fast as possible.
The blog post says the paper mentions 8.4e10 operations for factoring, but I can't find that number in the paper anywhere. The post then states: "The 800-bit claims would be 36 bits of work." I don't know what that means.
[edit]: the numbers are in the new version (https://eprint.iacr.org/2021/232). I was looking at the old version uploaded yesterday.
- titanomachy 4 years ago> The post states that 800-bit claims would be 36 bits of work. I don't know what that means.
From the article: "Schnorr’s paper claims to factor ... 800-bit moduli in 8.4·10¹⁰ operations"
2^36 ~= 8.4·10¹⁰, so I guess "36 bits of work" means 2^36 operations. Analogous to how a password with 36 bits of entropy would require 2^36 guesses. My first time encountering the phrase "bits of work" as well, though.
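The conversion is a one-liner; taking the base-2 logarithm of the claimed operation count gives the "bits of work":

```python
import math

ops = 8.4e10  # claimed operation count for an 800-bit modulus
bits_of_work = math.log2(ops)
assert round(bits_of_work) == 36  # hence "36 bits of work"
```

For comparison, the GNFS effort for 800-bit RSA is usually estimated at roughly 2^76 or more, which is what makes the claimed number so startling.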
- contravariant 4 years agoIt's in the abstract:
>Our accelerated strong primal-dual reduction of [GN08] factors integers N ≈ 2^400 and N ≈ 2^800 by 4.2·10^9 and 8.4·10^10 arithmetic operations.
- AnimalMuppet 4 years agoIncreasing the length by a factor of 2^400 only increased the amount of work by a factor of 20? Staggering, if true in general.
- ajarmst 4 years agoYeah, that's what got me. Doubling the length of the key only requires a single order of magnitude more work? If that turns out to be true, I'm going to need to revise my beliefs about how the universe works, in particular information theory and thermodynamics, because multiplying two numbers together doesn't preserve information about what the factors were. Or at least pretty much everyone thought so. (Caveat: if the values of primes turn out to have a predictable pattern, that could provide the needed information. However, that would mean that the Riemann Hypothesis is false, and that'd be an even more astounding result.)
- vitus 4 years agoActually, you're only increasing the length of the number by a factor of 2, since 2^400 is a 400-bit number.
If true, it's still leaps and bounds ahead of anything we have today, though.
- ajarmst 4 years ago
- AnimalMuppet 4 years ago
- tgsovlerkhgsel 4 years ago2^36 "operations" can still take a lot of time if each operation is multiplying two giant numbers, unless the meaning of "operation" is somehow normalized to mean e.g. 64 bit integer operations.
- TacticalCoder 4 years ago> 2^36 "operations" can still take a lot of time if each operation is multiplying two giant number
It took me 3.3 years of actual computation time to do about 2^46 multiplication+modulo operations on 2048-bit numbers on a Core i7. 2^36 operations on 2048-bit numbers should be doable in a day on an eight-year-old CPU.
P.S.: that was on a single core, as the problem I solved was explicitly designed not to be parallelizable.
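TacticalCoder's back-of-the-envelope can be reproduced: time a batch of 2048-bit modular multiplications and extrapolate to 2^36 operations (a sketch; absolute numbers will vary by machine and bignum implementation):

```python
import random
import time

BITS = 2048
# An odd 2048-bit modulus and two random residues (illustrative inputs).
n = random.getrandbits(BITS) | (1 << (BITS - 1)) | 1
a = random.getrandbits(BITS) % n
b = random.getrandbits(BITS) % n

BATCH = 100_000
start = time.perf_counter()
for _ in range(BATCH):
    a = (a * b) % n  # one multiplication + modulo
elapsed = time.perf_counter() - start

rate = BATCH / elapsed  # mulmods per second
days_for_2_36 = (2 ** 36 / rate) / 86_400
print(f"{rate:,.0f} mulmods/s -> {days_for_2_36:.1f} days for 2^36 ops")
```

Even in interpreted Python this typically lands in the range of a day or so for 2^36 mulmods on one core, consistent with the estimate above; optimized C with GMP would be considerably faster.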
- UncleOxidant 4 years agoIt didn't take long for custom ASICs for mining bitcoin to emerge. It wouldn't take long for custom ASICs to do these kinds of operations a lot faster than on a general purpose CPU to emerge.
- TacticalCoder 4 years ago
- libeclipse 4 years agoSupposing the paper does describe a more efficient factorisation algorithm, that does not imply that factoring a 800 bit prime (like the author of this article suggests) would be cheap.
- gowld 4 years agofactoring a 800 bit prime is definitely cheap. ;-)
- core-questions 4 years agoIn fact, I'll do it right now for you for only one dollar per bit.
- core-questions 4 years ago
- gowld 4 years ago
- titanomachy 4 years ago
- chrisco255 4 years agoFor those of us less familiar with cryptography and RSA in general: what are the implications if RSA is broken? What are the mitigations that would need to occur in its place?
- tgsovlerkhgsel 4 years agoIf RSA-2048 is practically broken or breakable:
The public web and code signing PKIs collapse overnight. Most certificate authorities use RSA-2048 either for the roots or the intermediates. The HN site not only uses an RSA-2048 key in its own certificate; the CA issuing that certificate and the root CA issuing the intermediate do too.
All data transmitted without forward secrecy on most web sites is compromised. Most websites nowadays use forward secrecy and/or ECDSA, but data sent years ago may still be of value (e.g. passwords) and become decryptable now.
Any data (e.g. backups, past e-mails) encrypted using RSA keys is at risk.
Any authentication system relying on RSA keys has a problem. This can include systems like smartcards or HSMs that are hard to update, software or firmware updates, etc. Banking too.
Edit to add - if RSA-1024 is practically breakable but RSA-2048 is not: some systems that relied on RSA-1024 have a problem. These should be rare, but sometimes legacy doesn't get updated until it becomes an absolute emergency. Everyone realizes that RSA-2048 is only a matter of time, that time is running out quicker than expected, and starts upgrading to ECDSA with more urgency. This will likely take a long time due to legacy hardware.
- bigiain 4 years agoSurely I'm not the only one who read this and thought "I wonder how long the NSA have known this result, and how much better their internal attacks are than public academic results? I wonder how much of their 'full take' internet backbone archive has been decrypted and keyword mined?"
- secfirstmd 4 years agoThere was a quote in a newspaper (I unfortunately forget which) about four years ago about a massive breakthrough in encryption by the NSA post-Snowden. Enough subtle hints about it. My working assumption has been that it was RSA-related. I noticed, for example, that some interesting organisations changed their guidelines about its usage in the past three years or so.
- secfirstmd 4 years ago
- upofadown 4 years ago>All data transmitted without forward secrecy on most web sites is compromised.
Forward secrecy does not protect against broken cryptography in general, so this is more about which methods were used and how much a new technique like this affects them.
- lokedhs 4 years agoTrue, but it does protect against broken RSA, because RSA is not used to encrypt the actual data. That's probably AES.
- lokedhs 4 years ago
- the8472 4 years agoAlso any proprietary/ancient SSH implementation only supporting RSA that you'll find in all kinds of boxes.
- mekster 4 years agoAre alternatives already available that need to be swapped in to be durable again?
- bigiain 4 years ago
- anonisko 4 years ago"Broken" generally isn't a binary event in cryptography.
It's a continuum from "impossible to do with all the time and energy of the universe and the most advanced computers we have" to "my commodity hardware can crack it in a few minutes".
The same goes for fears of quantum computing breaking current cryptography. It goes from effectively impossible to "yeah, we could break it with a few years of constant computation, which is plenty of time to switch to quantum resistant schemes".
- bawolff 4 years agoWell, that's generally true, but sometimes breakthroughs do happen overnight. It's not impossible.
- anonisko 4 years agoYup. That's why I say generally.
Even if the paper is correct it seems to fall into the 'moving down the continuum' category.
- anonisko 4 years ago
- jMyles 4 years ago> "Broken" generally isn't a binary event in cryptography.
If there were, for example, a way to glean a private key without factoring the modulus, I think we'd all agree that this amounts to "breaking" the system, insofar as it changes the applicability of the hardness assumption.
On the other hand, simply achieving a faster way to factor the modulus is, at best, part of a continuum as you say.
- mekster 4 years ago> which is plenty of time to switch to quantum resistant schemes
That's not how you treat broken cryptography. If your data is already collected and stored encrypted by a third party which still holds value after several years, you're already in bad shape.
- bawolff 4 years ago
- kadoban 4 years agoDepending on exactly how broken, a large portion of traffic on the internet would lose almost all of its security guarantees (privacy, resistance to forgery). It would also be a huge effort, and take forever to get everyone to fix it. That's about as close to "movie disaster" level as it gets.
- aaomidi 4 years ago1. We kinda knew RSA has an expiration date due to quantum computers. Assuming the paper is true, this just brought the expiration date far closer to us.
2. Major issue is going to be webpki and replaying govt captured encrypted communications.
3. There are a lot of abandoned servers out there that use RSA. There is a lot of code signing that uses RSA. There is just a lot of identity proven on the web that uses RSA to prove the identity. It's going to be a clusterfuck of identity. Again, assuming the paper means RSA is just completely broken.
- dataflow 4 years ago> 1. We kinda knew RSA has an expiration date due to quantum computers.
Only if you somehow "know" quantum computing is ever going to be practically realized. It may never be.
- freeone3000 4 years agoThere are no real big theoretical problems in the quantum computer building space. There are problems of scale, and funding, and the usual growing pains of a new industry, but scale went from 7 to 24 qubits fairly quickly and all it took was more money. If I gave IBM $10T, they could build me a 1024-qubit computer. Once it gets cheaper, which is the current problem, I don't see any reason why Azure Quantum (for example) wouldn't simply decrease in price to where it can be used practically.
- 4 years ago
- akvadrako 4 years agoThis is true - in general each bit you add to a quantum computer means the noise floor needs to be twice as low, so difficulty scales exponentially. That is unless a threshold is reached where error correction scales faster.
- freeone3000 4 years ago
- chrisco255 4 years agoDoes this technique for factorization by Schnorr have any implications for any other cryptographic methods as well (if confirmed)?
- dataflow 4 years ago
- sillysaurusx 4 years agoThere are lots of alternative constructions. ECC, for example.
1024-bit and higher RSA is still unfactorable, so I don't think anyone will be attacking RSA directly any time soon.
- akvadrako 4 years agoECC is considered even less quantum resistant than RSA because the key lengths are so short.
- DennisP 4 years agoBut for now, it's more important to ask whether ECC is vulnerable to some variant of Schnorr's attack, which uses conventional computers. We already had an algorithm to break RSA on quantum.
- DennisP 4 years ago
- akvadrako 4 years ago
- Accujack 4 years agoWell, the most obvious problem for the average lay person is that the movie "Sneakers" would suddenly gain relevance again, and most of the cast are too old (or too dead) to make a sequel.
Also, that whole thing about lots of computer encryption tech suddenly being effectively insecure.
- rurban 4 years agoHe didn't claim it is broken, only that it can be broken faster than before. RSA 4096, as recommended by the FSF, is still secure. RSA 2048 might be breakable by the NSA. But so far only 800-1000 bit keys are at risk.
- TacticalCoder 4 years ago> He didn't claim it is broken.
But then there's that line: "This destroys the RSA cryptosystem" in the abstract of the paper.
- rurban 4 years agoYes, because RSA 2048 is broken, not RSA 4096. Almost everything uses 2k; nobody uses 4k. The cryptosystem infrastructure would be broken. Not to mention the possible backdoors in the ECC NIST curves. Thankfully we have the djb curves there.
- rurban 4 years ago
- TacticalCoder 4 years ago
- bawolff 4 years agoThis would massively break basically all traditional public key crypto, I think (depends a bit on whether it extends to elliptic-curve crypto or just integer-based RSA [edit: meant to say whether the algorithm can be adapted to solving discrete logarithms over elliptic curves]). It would be the biggest crypto thing to happen in the last 30 years at least.
The mitigation would be to move to experimental post-quantum crypto systems immediately (quantum computers have all the fuss because they can break rsa).
This is basically an unbelievable result. Without actually providing some factored numbers i am very doubtful.
[I have not read paper]
Edit: as pointed out below, i may have gotten overexcited. Still an incredible result if true.
- teraflop 4 years agoThere is no such thing as "elliptic-curve RSA".
- jMyles 4 years ago> This would massively break basically all traditional public key crypto i think (depends a bit on if it extends to elliptic-curve or just integer based RSA).
"A bit"? A lot more than a bit. A world.
And on the surface, since it appears to be a factoring system, rather than a general purpose discrete log solver, the consequences, while incredible, are far more limited than the picture you paint. If this is even true; a matter over which I'm skeptical.
- SuchAnonMuchWow 4 years agoAs stated in your comment, elliptic curve cryptography relies on the discrete logarithm, but this algorithm is a method for factoring integers, with ideas similar to the Quadratic Sieve algorithm (https://en.wikipedia.org/wiki/Quadratic_sieve).
It does not extend to breaking elliptic-curve cryptography, for the same reason that the Quadratic Sieve does not extend to elliptic-curve crypto: the underlying math problem is different (factorisation vs discrete logarithm).
- teraflop 4 years ago
- tgsovlerkhgsel 4 years ago
- abetusk 4 years agoOK, here is a brief overview for people:
To factor a number N (assumed to be essentially the product of two very large primes), find 'short' lattice vectors [0] using LLL [1] (and BKZ reduction? [2]) that yield many relations of the form:

  u_i = p_{i,0} * p_{i,1} * ... * p_{i,n-1}
  u_i - v_i * N = q_{i,0} * q_{i,1} * ... * q_{i,n-1}

where the p's and q's are small primes.
Numbers whose factors are all less than some prime B are said to be "B-smooth". In the above, u_i and (u_i - v_i * N) are p_{i,n-1}-smooth and q_{i,n-1}-smooth, respectively.
Construct enough u_i and (u_i - v_i * N) that you can form a product of primes r_i of the form:

  r_0^{2 b_0} * r_1^{2 b_1} * ... * r_{n-1}^{2 b_{n-1}} = 1 mod N

where each b_i is an integer.
Since all the exponents (2 b_i) are even, we can take a square root of 1 mod N, and because N is composite that square root can resolve to something other than ±1. Let y be the product of the r_i^{b_i}. Since y^2 = 1 mod N, we get (y-1)(y+1) = 0 mod N. If neither (y-1) nor (y+1) is 0 mod N, then each must share a nontrivial factor with N, and we've successfully factored.
The trick is, of course, finding the smooth numbers. To do this, a lattice basis is made such that you find a short integer relation of the form

  a_0 ln(p_0) + a_1 ln(p_1) + ... + a_{n-1} ln(p_{n-1}) ~= ln(N)

where ~= means "approximately equal to".
u is chosen as the product of the primes p_i with a_i > 0 (raised to those exponents) and v as the product of the primes with a_i < 0 (raised to |a_i|). The hope is that (u - v*N) is also p_{n-1}-smooth, which, as far as I understand, is what most of the math in the paper is trying to justify.
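The u, v construction can be sketched directly. This toy search (my own illustration; the whole point of the lattice step is to find good exponent vectors deliberately instead of by luck) just tries random small exponent vectors and keeps ones where u - v*N is smooth:

```python
import math
import random

def smooth_factor(n, primes):
    """Exponent vector of n over `primes` if n factors completely over them, else None."""
    n = abs(n)
    if n == 0:
        return None
    exps = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)
    return exps if n == 1 else None

def find_relation(N, primes, tries=20000, seed=0):
    """Random stand-in for the lattice step: look for u, v with u just above v*N
    and u - v*N smooth over the factor base."""
    rng = random.Random(seed)
    for _ in range(tries):
        a = [rng.randint(-3, 3) for _ in primes]
        u = math.prod(p**e for p, e in zip(primes, a) if e > 0)
        v = math.prod(p**-e for p, e in zip(primes, a) if e < 0)
        if u > v * N and smooth_factor(u - v * N, primes):
            return u, v
    return None

# e.g. find_relation(9073, [2, 3, 5, 7, 11, 13])
```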
The main innovation here, as far as I can tell, is that Schnorr is fiddling with the 'weighting' of the main diagonal when constructing the lattice basis. I interpret this as basically trying to randomize the initial lattice basis so that a different integer relation (for the eventual construction of u, v) becomes more probable.
I've been confused about this for over a decade as variants of this algorithm, and Schnorr's work in general, have been well published. For example, there's a paper from 2010 on "A Note on Integer Factorization Using Lattices" by Antonio Vera which discusses Schnorr's [3] construction.
Is Schnorr trying to shout louder so people will listen or is there something else fundamentally flawed with this type of algorithm?
Just a word of warning: LLL solves polynomial factorization in polynomial time (given a polynomial with integer coefficients, find its factor polynomials, also with integer coefficients) [4] and has been used to break other (now very old) cryptosystems [5]. If there's a candidate approach for solving integer factoring, lattice reduction (LLL, PSLQ, etc.) is it.
I know of fplll, a stand-alone (FOSS) implementation of LLL and some extensions (BKZ, etc.) [6].
[0] https://en.wikipedia.org/wiki/Lattice_reduction
[1] https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%...
[2] https://www.newton.ac.uk/files/seminar/20140509093009501-202...
[3] https://arxiv.org/pdf/1003.5461.pdf
[4] https://en.wikipedia.org/wiki/Factorization_of_polynomials#F...
- abetusk 4 years agoOK, a little more context. Sophie Schmieg has a twitter thread discussing some of these issues:
https://twitter.com/SchmiegSophie/status/1367197172664389635
They mostly mirror the article this post is under, namely "show me the factors". Schnorr claims spectacular run times, but it isn't clear that he provides actual data from runs he's done. He also doesn't discuss complexity considerations in the detail they deserve, while dwelling on basic details that I suppose wouldn't normally show up in a paper like this.
- titanomachy 4 years agoThanks for summarizing, and talking about what's novel here.
In the paper Schnorr suggests that this algorithm factors 800-bit integers in ~10^11 operations [36 bits], whereas the Number Field Sieve uses ~10^23 [76 bits]. Does that 76-bit figure represent the current state of the art, more or less?
Also, since the paper talks only in terms of specific sizes of integers, I assume there's no claimed asymptotic speedup over existing methods?
- abetusk 4 years agoLLL is effectively a polynomial time algorithm, though the exponent is large (it's like O(n^5) or O(n^6), also depending on bit length, etc.), so it might fall into the 'galactic algorithm' [0] territory.
The runtime also depends on the density of smooth numbers, which determines how likely each attempt is to find a "B-smooth" number. That density might fall off too quickly for the overall algorithm to be polynomial or even quasi-polynomial time.
In my own opinion, algorithms with large polynomial exponents have a way of becoming more efficient in practice, so this isn't usually a big blocking point, especially not for long. For example, the interior point method eventually lent itself to faster implementations. As for the density of smooth numbers, I suspect this is also not a big slowdown factor.
Anyway, like I said, I've been confused about this point for a long time. It seems like this method is providing effectively a polynomial time algorithm for integer factoring. I read this paper and others by Schnorr as "this is 'efficient' in the theoretical sense but the polynomial exponent is so large as to make it computationally infeasible right now" but then I don't know why this hasn't been bigger news. Maybe because the algorithm has a randomness component to it?
I don't know. If anyone is wiser than me, I would appreciate some enlightenment.
- abetusk 4 years ago
- salawat 4 years agoMy impression was that the big optimization was the success-rate threshold used to guide the enumeration more quickly to the actually correct vector, rather than wasting cycles on smaller iterative improvements. But I'm still digesting the paper, my linear algebra intuition is so rusty I may need a tetanus shot from using it, and what little number theory I've got doesn't amount to enough to get a shoe shine in 1930.
- abetusk 4 years ago
- hertzrat 4 years agoIf someone wasn’t a cryptographer, but does occasional security tasks at work, what is the takeaway? RSA needs to be 4096 or higher now, or that similar techniques in the future might make RSA a bad choice altogether?
- owenmarshall 4 years agoDon’t worry - yet. This is either a nothingburger, or it’s going to be a nightmare for everyone, all at once (ever dealt with web PKI? you will get a chance if this is true)
But there’s no real current takeaway until we know if this approach works, and if so how extensible it is to RSA, especially 2048 bit RSA.
- marcosdumay 4 years agoThere are plenty of techniques in the past that make RSA a bad choice altogether.
If you are going with it anyway, yeah, 4k bits is a safe choice for making it reasonably secure right now (2k being a bare minimum), but remember, attacks always get better, never worse, and RSA has a fair share of possible attacks.
- owenmarshall 4 years ago
- tgsovlerkhgsel 4 years agoDevil's advocate: Posting the factors requires implementation work, then optimization, then a manageable but possibly still not trivial amount of resources and time - and likely a lot of trial and error. It is perfectly conceivable that a paper would be published before the implementation is actually better than a slower but heavily optimized approach. (I don't even try to understand the paper, but I've seen a mention that it's a storage tradeoff, which may make it a very different kind of optimization problem.)
Do we know that the paper is definitely from Schnorr? (Edit: The article claims its provenance is confirmed). The "destroys the RSA cryptosystem" claim is now part of the paper. While anyone can make mistakes, I would expect such claims to be at least somewhat carefully checked before releasing them.
Either way, I expect that we'll see either a retraction/correction or factors within weeks.
- hn_throwaway_99 4 years agoThis was my exact argument: https://news.ycombinator.com/item?id=26323951
Should be trivial to show a working proof on a smaller-than-usual RSA number if "this really destroys RSA".
- gojomo 4 years agoI can imagine a certain pure-theorist mindset being confident enough in their reasoning, but not yet their coding, to report this first. Or, strategically holding definitive proof back as a hammer to deploy once the doubters reveal themselves.
Why not let others do the rote reduction-to-practice?
Why not create an example where your theory was correct, & your reputation was on the line, that took a little while to resolve – but when it does, it does so definitively in your favor, so you are more trusted in future pre-definitive-verification pronouncements?
(I don't know enough about Schnorr-the-person to know if this fits his personality, but I can imagine such personalities.)
- dang 4 years agoThis was heavily discussed yesterday. (Edit: this next bit was out of date:) It seems the provenance of the paper and the 'destroy' claim are unclear.
“This destroys the RSA cryptosystem” - https://news.ycombinator.com/item?id=26321962 - March 2021 (140 comments)
- 4 years ago
- unnouinceput 4 years agoAn example, by hand, from the paper author, where he uses this algorithm to factor a number, would be great. Even a small number that's easy to factor by brute force would be enough to actually prove that his claims are true. We'll do the code implementation and run it against RSA challenge numbers, and see if this is a prank or the real deal.
- yipbub 4 years agoCrypto noob question: Wouldn't it be prudent to switch to something like ECDSA (hearsay that it is stronger) if there was even a hint that this was possible?
If a major government got wind that such work was going on, wouldn't it be prudent to publish before you are disappeared? I assume high-profile crypto research people are spied on.
- racecar789 4 years agoI know a lot of programming languages, but I have never wrapped my head around math notation.
Question for someone who is familiar with math notation... was the abstract of this article easy to understand?
For me, the abstract seems like code but with no commentary explaining what each block does. But I could be mistaken.
- nightcracker 4 years agoFor context: I'm a computer science MSc student.
The notation is easy to understand (and as far as mathematical notation goes, really quite tame). I don't know what a nearly shortest vector of a lattice is in this context, but I do understand everything else. Note that means I have no idea how the actual method works, but I can understand what's being claimed.
- sterlind 4 years agoNot an expert at all, but you can think of lattices as evenly-spaced grid points in a vector space. Given a set of basis vectors b0..bn and arbitrary integers a0..an, the sums a0*b0 + ... + an*bn are the points of the lattice generated by b.
You can have a "good basis" where the norms for b are low, or an equivalent "bad basis" with the same lattice points but with high norms. That's one hard problem (lattice reduction), but there are polynomial-time approximations.
The shortest vector problem, iirc, is to find the shortest nonzero vector in the lattice (equivalently, the smallest-norm vector you'd get in the best possible basis).
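In two dimensions the good-basis/bad-basis picture is easy to play with. This is the classical Lagrange-Gauss reduction (my own sketch, not fpylll's LLL), which turns a bad basis into a good one for the same lattice:

```python
def gauss_reduce(b1, b2):
    """Reduce a 2D integer lattice basis toward its shortest vectors."""
    def norm2(v):
        return v[0] * v[0] + v[1] * v[1]

    b1, b2 = list(b1), list(b2)
    while True:
        if norm2(b2) < norm2(b1):
            b1, b2 = b2, b1          # keep the shorter vector first
        # integer multiple of b1 that best cancels b2
        m = round((b1[0] * b2[0] + b1[1] * b2[1]) / norm2(b1))
        if m == 0:
            return b1, b2
        b2 = [b2[0] - m * b1[0], b2[1] - m * b1[1]]

# The "bad" basis (1, 0), (100, 1) spans the same lattice (all of Z^2)
# as the "good" basis (1, 0), (0, 1) that reduction recovers.
```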
- sterlind 4 years ago
- wtallis 4 years agoThe first half of the abstract is more akin to declaring the data types and structures used, and the second half is mostly a very high level summary of the overall method and results. It's not supposed to be interpreted like code. It's just setting up the context you need to start interpreting the meat of the paper, and giving you a heads-up about what background topics to Google if anything in the abstract sounds unfamiliar.
- chmod775 4 years ago> For me, the abstract seems like code but no commentary explaining what each block does. But I could be mistaken.
You are mistaken. (Pretty much) all of mathematics is written as natural language, and those symbols are just abbreviations for stuff that could also be written as words. If I read those sentences out loud, another could write them back down and arrive at something that looks the same.
That's why all of mathematical notation is embedded in sentences: the symbols are part of the sentence and can be read as such.
Further, that is really basic notation that a first-semester student of any STEM discipline should be able to read, though I wouldn't expect them to know what a lattice or some of the other terminology is.
- kfrzcode 4 years agoI'd love a "cheat sheet" or dictionary for mathematical notation. I don't know how to pronounce half of the embedded symbology, let alone what rules apply. It seems so esoteric and arbitrary sometimes, though I recognize it's most certainly not.
- chmod775 4 years ago> It seems so esoteric and arbitrary sometimes
It kind of is since a lot of it grew organically. Often it isn't even consistent across authors/countries. It wasn't really "designed".
The German Wikipedia has a decent overview of symbols and I often use it as a cheat-sheet for LaTeX, whereas the English equivalent of that article has a better explanation of basic notation.
https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbo...
https://de.wikipedia.org/wiki/Liste_mathematischer_Symbole?o...
Even if you don't speak German, the article might be useful because you can follow the links to the individual articles next to the symbols, then change the language back to English. Or use it as a LaTeX reference if you're like me and have trouble remembering some of the more wonky abbreviations it uses. Of course that article isn't comprehensive, but most of the stuff that is missing is very domain-specific.
- chmod775 4 years ago
- kfrzcode 4 years ago
- dutchmartin 4 years agoFor someone who took a matrix calculation (linear algebra) course like me, it was kinda understandable.
- woah 4 years agoYou will not be able to understand the notation if you do not understand the math
- nightcracker 4 years ago
- natch 4 years ago> the provenance of the paper has been confirmed: it is indeed Schnorr.
What I read is that someone contacted Schnorr over email to get this confirmation.
I’m not saying the confirmation is wrong. And I’m not saying email cannot convey information.
- ornxka 4 years agoWell, it's definitely suspect now that RSA is broken.
- AnimalMuppet 4 years ago"Email cannot convey information"? Baloney. It does all the time.
You seem to mean something different from what your words say...
- natch 4 years ago“I’m not saying...”
see?
- AnimalMuppet 4 years agoDid you edit that, or did I misread it?
- AnimalMuppet 4 years ago
- natch 4 years ago
- ornxka 4 years ago
- tandr 4 years agoWhat does "36 bits of work" mean, sorry?
- bawolff 4 years agoMy naive assumption would be, takes 2^36 cpu operations
- wtallis 4 years ago2^36 arithmetic operations is what is claimed. That's not quite the same as CPU operations, because we only have 64-bit CPUs with up to 512-bit vector instructions, but we're talking about factoring 800-bit numbers. So we need to allow for several CPU instructions to implement each of the required arithmetic operations.
- ISL 4 years agoIf so, 2^36 ~ 7 x 10^10, so on the order of a minute on a single GHz-class core, and only seconds with superscalar execution or multiple cores.
- wtallis 4 years ago
- jtsiskin 4 years agoYeah I would be great if they could translate that into core-years to match the references they listed
- aDfbrtVt 4 years agoI'm guessing it's shorthand for the order of magnitude of the work: log2(8.4e10) = 36.3 bits of operations.
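The conversion is just a base-2 logarithm, e.g.:

```python
import math

ops = 8.4e10                  # the claimed ~10^11 operation count for 800-bit N
bits_of_work = math.log2(ops)
# roughly 36.3, hence "36 bits of work"
```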
- bawolff 4 years ago
- sodality2 4 years agoFactors or gtfo
- StavrosK 4 years agoGet the factors out
- StavrosK 4 years ago
- shashasha2 4 years agoIs prime factorisation used in SHA256 ? Would I be able to solo mine from my CPU again ?
- unnouinceput 4 years agoNo, to both questions. Implementing SHA256 is actually quite easy: no more than 50 lines of code. My current implementation, which I use personally, is under 100 lines of code including variable declarations (not just executable lines of code).
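For instance, in Python you don't even need your own implementation; the standard library has one, and there's no factoring anywhere in sight:

```python
import hashlib

# SHA-256 is built from bitwise mixing and modular addition,
# not from a number-theoretic trapdoor like factoring.
digest = hashlib.sha256(b"hello").hexdigest()
# "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
```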
- runeks 4 years agoNo. Prime factorization is used for public key cryptography, not hashing.
- unnouinceput 4 years ago
- jagger27 4 years agoThat's really all there is to it. Pudding, proof, etc.
- TacticalCoder 4 years agoI am no cryptographer. I did implement, from the paper, Yao's "socialist millionaire" cryptographic protocol but... It was only a few lines of code and a very simple (to understand, not to come up with) paper.
Now I just looked at that Schnorr paper and, well, I can tell you that I'm not going to be the one implementing it : (
- senderista 4 years agoI wonder if Schnorr is going senile like Atiyah.
- anonisko 4 years agoReminiscent of Craig Wright's claim to be Satoshi.
It doesn't matter what you claim with words if you can't back it up with cryptographic evidence.
Shut up and prove you've done (or can do) the work.
- biolurker1 4 years agoAre you really comparing a con artist with one of the most famous cryptographers?
- anonisko 4 years agoDear lord no. I can see how it might come across like that.
More drawing attention to the wider theme that we generally should not take people at their word when we have the option to demand proof of work that can't be faked or mistaken.
Don't trust. Verify.
- Ar-Curunir 4 years agoThese are not trivial algorithms to implement, and previous factorization records required months of work from implementation experts. It's not an easy task, and theory work stands independently of implementation effort.
- Ar-Curunir 4 years ago
- anonisko 4 years ago
- 4 years ago
- biolurker1 4 years ago
- bhouston 4 years agoThe first thing this will be used for is stealing Bitcoin and other cryptocurrency, I predict. So watch out for your wallets.
- kinghajj 4 years agoBitcoin wallets don't use RSA, but ECDSA.
- bhouston 4 years agoI was referring to the possibility of man in the middle attacks on Bitcoin applications.
- bhouston 4 years ago
- postalrat 4 years agoOnce people figure out the math to break bitcoin, they can transfer bitcoin from any address.
I don't know why people don't bring this up more often. It will likely happen long before quantum computers make it possible.
- kinghajj 4 years agoBecause, simply, it's not true. I'm curious though what attack vector you're thinking of, especially one that's not related to quantum computers. Are you worried that the ECDSA public-key cryptosystem employed by Bitcoin will be broken, such that the private keys could somehow get derived easily from the public ones? If so, that still wouldn't let an attacker "transfer bitcoin from any address," since the addresses themselves are hashes of the public keys. So people would have to stop re-using addresses to receive bitcoin multiple times, since once an address has been the sender in a transaction, and its public key revealed, it would become vulnerable.
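Schematically, the address-is-a-hash point looks like this (a simplified sketch with made-up values: real Bitcoin addresses use SHA-256 then RIPEMD-160 plus Base58Check, not a bare SHA-256):

```python
import hashlib

def toy_address(pubkey_bytes):
    # The address commits to the public key without revealing it;
    # the key itself only appears on-chain when you spend from the address.
    return hashlib.sha256(pubkey_bytes).hexdigest()[:40]

# stand-in for a 33-byte compressed public key
addr = toy_address(b"\x02" + b"\x11" * 32)
```

So even a discrete-log break would only threaten addresses whose public key has already been revealed by a prior spend.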
- kinghajj 4 years ago
- kinghajj 4 years ago
- jMyles 4 years agoI'm skeptical. The paper is too tough for me to digest without spending days/weeks/lifetimes focusing on it (and there are many who can do it much faster, obviously). But I think that if RSA is materially broken, we'll know it from movements in the ground (e.g., sudden mysterious forged signatures) by the time a paper is published.
I don't think that such a secret can be kept for more than a few minutes without someone immediately proceeding to runtime weaponization.