What does the end of mathematics look like?
62 points by awanderingmind 1 month ago | 86 comments
- npodbielski 1 month ago: I think people like the author are optimistic about us, humanity, being able to build AI, or something very close to that.
I am not.
From the energy efficiency perspective the human brain is a very, very efficient computational machine. Computers are not. Thinking about the scale of infrastructure a network of computers would need to achieve similar capabilities, and its energy consumption... it would be enormous. With big infrastructure comes a high need for maintenance. This is costly and requires a lot of people just to prevent it from breaking down. With a lot of people in one place, there are socioeconomic costs: production and transportation need to be built around such a center. And if you have a centralized system, you are prone to attack from adversaries. In short, I do not think we are even close to what the author is afraid of. We are just beginning to understand what would actually be needed to start building AI - if it is possible at all.
- awanderingmind 1 month ago: I hope you are right - the point about energy efficiency is certainly spot on, and I do think it is possible that people are getting carried away by analogies when discussing the topic (I wrote something about that too, but will avoid linking to it here to avoid excessive self-promotion).
That said, the article doesn't assume such a thing will happen soon, just that it may happen at some time in the future. That could be centuries away - I would still argue the end result is something to be concerned about.
- Chris2048 1 month ago: > From the energy efficiency perspective the human brain is a very, very efficient computational machine
Can you explain why you think that? Very often, mechanical efficiency outperforms biological. Humans have existed for thousands of years, neurons even longer. Computers and AI are relatively recent; we haven't really begun to explore optimisation possibilities.
- daveguy 1 month ago: The human brain runs on about 20 watts of power -- the entire body on 80 watts (at rest). Those numbers are good to at least an order of magnitude. The largest supercomputer consumes 29 megawatts of power for 1.7 exaflops. And the largest supercomputers are nowhere near the flexible generality of a human brain -- they're just calculating FFTs for the benchmark test.
The amount of parallelism in the human brain is enormous. Not just each neuron, but each synapse has computational capacity. That means ~10^14 computational units or 100 trillion processing units -- on about 20 watts.
That doesn't even touch the bandwidth issues. Getting sensory input into and out of the brain, plus the bandwidth for all of the processing signals between neurons, is at least another petabit per second. So, on bandwidth capacity alone we are 25+ years away (assuming the last 25 years of growth continue). And in humans that comes with 18 years of training at that massive bandwidth and computational power.
Also, we have no idea what a general intelligence algorithm looks like. We are just now getting multimodal LLMs.
From the computational/bandwidth perspective we are still 30 years from a computer being able to process the information a single human brain does, and even then while consuming 29+ megawatts of energy. If you had to feed a human 29 megawatts' worth of power, no business would be profitable. Humans wouldn't even survive.
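To make that gap concrete, here is a back-of-the-envelope sketch in Python using the figures above (the brain's effective update rate of ~100 Hz per synapse is a loose assumption, not a measurement):

    # Rough energy-efficiency comparison, using the numbers in this comment.
    brain_watts = 20            # brain power draw
    super_watts = 29e6          # largest supercomputer, ~29 MW
    super_flops = 1.7e18        # ~1.7 exaflops
    synapses = 1e14             # ~100 trillion "processing units"
    brain_ops = synapses * 100  # assumed ~100 Hz effective rate per synapse

    print(f"supercomputer: {super_flops / super_watts:.1e} flops/W")  # ~5.9e10
    print(f"brain (assumed): {brain_ops / brain_watts:.1e} ops/W")    # ~5.0e14
    print(f"power ratio: {super_watts / brain_watts:.1e}x")           # ~1.5e6

On those assumptions the brain comes out roughly four orders of magnitude more energy efficient per operation, which is the point being made.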
Sorry, but the notion that we are close to AGI because we have good word predictors is fantasy. But, there will be some amazing natural language human-computer interface improvements over the next 10 years!
- lioeters 1 month ago: The next logical step, perhaps ethically questionable, seems to be growing human brains for computational purposes (parallel or quantum), with high bandwidth and very efficient power consumption.
- southernplaces7 1 month ago: Thank you for describing, with specific details about comparative capabilities vs. energy use, all the main reasons why human brains are so much, much more energy efficient at everything they do than any current computer, LLM or algorithm.
- Chris2048 1 month ago: > we have no idea what a general intelligence algorithm looks like
What's the goalpost here, though? Modern "AI" stuff we previously thought not possible; proper full human-brain simulation; or a general form of higher AI that could come from either place?
> The amount of parallelism in the human brain is enormous.
That only demonstrates the possibilities yet to be explored. Biology has a millions-of-years head start; what's possible today could have been balked at a few centuries ago by the same argument as yours. You say "We are just now getting multimodal LLMs" like it's somehow late.
At a fundamental level, what holds back biology is all the other things it does (i.e. staying alive) and the limits imposed on it (e.g. heat) that a purpose-made device can optimise away. Any physical, thermodynamic or communication-theoretic argument over what's possible would hold back both biological and mechanical devices. Only there are fewer material constraints for machines - they can even explicitly exploit quantum mechanics.
> Sorry, but the notion that we are close to AGI
Seems we are arguing different things. I went back through the thread, and believe the proposition is: "us, humanity, being able to build AI or something being very close to that", which I read as a comment on our literal species. I took your statement "From the energy efficiency perspective the human brain is a very, very efficient computational machine" as being in that scope, and not just a reference to the current era (or decade!).
- rcxdude 1 month ago: It may be possible to optimise silicon further, but the brain does all of its work with less than a hundred watts, while the silicon closest to its capabilities needs more like tens of kW.
- Ukv 1 month ago: > while the silicon closest to its capabilities needs more like tens of kW.
I think looking at power consumption for the very edge of what technology is just barely capable of may be misleading, since that's inherently at one extreme of the current cost-capability trade-off curve[0] and stands to drop the most drastically from efficiency improvements.
You can now run models equivalent in capability to the initial version of ChatGPT on sub-20W chips, for instance. Or, looking over a longer timeframe, we can now do far more on a 1-milliwatt chip[1] than on the 150kW ENIAC[2].
[0]: https://i.imgur.com/GydBGRG.png
[1]: https://spectrum.ieee.org/syntiant-chip-plays-doom
[2]: https://cse.engin.umich.edu/about/history/eniac-display/
- southernplaces7 1 month ago: > we haven't really begun to explore optimisation possibilities.
So you're questioning the above comment's argument based on a hand-wavy claim about completely speculative future possibilities?
As it stands, there's no disputing the human brain's energy efficiency for all the computing it does, in so many ways that AI can't even begin to match. This is to say nothing of the whole unknown territory of whatever it is that gives us consciousness.
- Chris2048 1 month ago: Is it speculative to suggest that technology will improve? No.
> whatever it is that gives us consciousness
Talk about hand-wavy; "consciousness" might not be a real thing. You might as well ask if AI has a soul.
- dsign 1 month ago: I love the language of this article :-)... it may be florid, but that's quintessentially human.
About the substance, I agree that there are fair grounds for concern, and it's not just about mathematics.
The best case scenario is rejection and prohibition of uses of AI that fundamentally threaten human autonomy. It is theoretically possible to do so, but since capital and power are pro-AI[^1], getting there requires a social revolution that upends the current world order. Even if one were to happen, the results wouldn't last for long. Unless said revolution were so utterly radical that it would set us on a return trajectory to the Middle Ages (I have something of the sort published somewhere, check my profile!).
I'm an optimist when it comes to the enabling power of AI for a select few. But I'm a pessimist otherwise: if the richest nation on Earth can't educate its citizens, what hope is there that humans will be able to supervise and control AI for long? Given our current trajectory, if nothing changes, we are set for civilizational catastrophe.
[^1]: Replacing expensive human labor is the most powerful modern economic incentive I know of. Money wants, money gets.
- awanderingmind 1 month ago: Thanks for the positive feedback on my writing style! Based on feedback in this thread it seems to be a divisive topic, haha.
- npodbielski 1 month ago: I would say that I am envious that someone can write like that. I can't write in such a manner in my native language, let alone in my second one: English. It is nice to read or hear someone who speaks like that, considering we are surrounded by low-quality, easy-to-consume content nowadays.
And I envy such skill because I like to think of myself as not entirely stupid, yet I would never be able to write/speak this way because I just do not have an aptitude for it.
- awanderingmind 1 month ago: English is my second language too - ironically, I am not as proficient in my mother tongue, due to globalisation/imperialism. But it's ok, fortunately I love the language!
- ninininino 1 month ago: I really appreciated it too.
- lmm 1 month ago: The camera didn't kill painting. Neither the bicycle nor the motor-car killed running. There are already subfields of mathematics where it's believed that all the interesting discoveries have been found and no-one is looking except for the occasional amateur - and other subfields where, to even have a hope of doing cutting-edge research, you would need to both do multiple years of postgraduate study and then get accepted onto one of a small number of close-knit teams that are pushing that cutting edge on an industrial scale.
So I don't see any reason to worry about the impact of AI. Unlike most fields with AI worries, mathematical research isn't even a significant employment area, and people with jobs doing it could almost certainly be doing something else for more money.
- lo_zamoyski 1 month ago: A recurring problem I see is that people have absolutely drunk the koolaid of the utilitarian worldview. They've been marinating in it for so long, with no exposure to anything beyond this parochial and base existence, that they have no idea they've been marinating in it. Everything is reduced to economic exchange. Everything is reduced to economic output. Learning is reduced to something that has value only if it is "marketable", only if it results in more units of toilet paper. The human being disappears from the picture as a mere instrument of the economy, an economy that exists for its own sake, that hovers over us like a god.
Given that kind of picture of reality, it is little wonder that AI seems like such a profound threat to so many people (putting aside for the moment the distinction between the aspirations of AI companies and the actual affordances it possesses). If being human is to be an economic instrument, then any AI that could eliminate the economic value of human beings is something akin to extinction. The god of economics has no further need of you. You may die now.
But this utilitarian view of the world reeks of nihilism. It is the world of total work, of work for work's sake. We never inquire about the ends that are the very reason for work in the first place. We never come to an understanding that economies exist for us, that we create them for mutual benefit. And we never seem to grasp that the economic part of human life is only part of human life, that it exists for the sake of those other parts, the more important and most important parts of life, that are not a matter of economics. We have come to view life as meaningless, so we run into the embrace of the god of economics, losing ourselves in its endless churn, its immediate goals, truncating our minds so that we do not conceive of anything else, longing to escape the horror of the abyss that awaits us outside its dreary confines...
The point of studying something in a theoretical capacity is to understand it, not to produce something of economic value. Each person must come into understanding from a state of not understanding. Homo economicus does not comprehend this. Homo economicus lives to eat and shit and cum and to accumulate things.
- thomastjeffery 1 month ago: Another problem this creates is that every action is evaluated as a competition. Even collaborative work must be encapsulated in a team. The only success that can be imagined is victory. The only path to consent is by concession and contract.
This framework is explicitly enforced by copyright law. Because a copyright monopoly is automatically granted to every content creator, every person is automatically expected to participate in the copyright system.
Copyright law hinges on incompatibility. The easier it is to make compatible work, the easier it is to make derivative work, which copyright defines as the ultimate evil.
Generative statistical models (what everyone is calling AI) are calling this bluff harder than ever. Derivative work is easier than any time in history.
So what do we do about it? It's pretty obvious from my perspective that the best move forward is to eliminate copyright for everyone. It seems instead, that the most likely outcome is to eliminate copyright exclusively for the giant corporations that successfully launder their collaboration (derivative work) through large generative models.
- auggierose 1 month ago: I agree with much of that, but of course, we MUST eat and shit (at least for the time being). So the economic part MUST function, otherwise all other parts are not important anymore.
- jcelerier 1 month ago: > The camera didn't kill painting
But it did. Painting used to be a trade where you could sell your skills for more than purely aesthetic reasons, simply because there was no other way to document the world around you. It just isn't anymore, because of cameras. Professional oil portrait painter isn't a career in 2025.
- prennert 1 month ago: The Royal Society of Portrait Painters might disagree: https://therp.co.uk/artists/
- awanderingmind 1 month ago: Well, it is still a career, but it's very niche, and more attuned to 'art' than 'documenting the world'.
- sgt101 1 month ago: And instead we have photographers who can document the world at great volume. My grandparents had no visual record of their wedding. My wife is a wedding photographer...
- Suppafly 1 month ago: > Painting used to be a trade where you could sell your skills for more than purely aesthetic reasons, simply because there was no other way to document the world around you.
Source? If anything I suspect there are more people making a living as painters now than at any point in history.
- ninalanyon 1 month ago: As a proportion of the population?
- Chris2048 1 month ago: > nor the motor-car killed running
Is running an art/vocation comparable to photography and/or painting? We no longer have mailmen who run the length of the country afaik.
But the motor-car did heavily contribute to sedentary lifestyles in Western countries, along with a bunch of other things.
> mathematical research isn't even a significant employment area
I agree. I think it will move from mathematicians "doing" math to managing computerised systems that do it instead. I'm sure we already have such systems.
I think far more important to humanity is improving mathematical literacy. From my perspective, math is made for mathematicians - it could be more accessible. As "pure" math matures, there is still plenty of opportunity in "applied" math (however you might define it).
- mathgradthrow 1 month ago: The terrifying thing about Lean and machine learning is not the idea that we will train computers on human-written proofs, but that we won't have to. With the rules of chess easily computed, a computer can use the Bellman equations and self-play to learn a policy that is vastly superior to human play, at least on the critical path.
The state space of mathematics is pretty different from chess, but I think ultimately mathematicians are just running something like A* on the space of propositions, with a custom heuristic that is learned by approximating the result of running A* with that heuristic - where the error is just the difference between the actual and predicted length of the proof.
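A minimal sketch of that idea in Python, where `neighbours` (one inference step) and `estimate` (the learned predictor of remaining proof length) are hypothetical stand-ins rather than a real prover:

    import heapq
    from itertools import count

    def prove(axioms, goal, neighbours, estimate):
        """A* over propositions, guided by a learned proof-length estimate."""
        tie = count()  # tie-breaker so the heap never compares propositions
        frontier = [(estimate(a, goal), next(tie), 0, a, (a,)) for a in axioms]
        heapq.heapify(frontier)
        seen = set(axioms)
        while frontier:
            _, _, cost, prop, path = heapq.heappop(frontier)
            if prop == goal:
                return path  # a proof: chain of propositions from an axiom to the goal
            for nxt in neighbours(prop):  # propositions one inference step away
                if nxt not in seen:
                    seen.add(nxt)
                    f = cost + 1 + estimate(nxt, goal)
                    heapq.heappush(frontier, (f, next(tie), cost + 1, nxt, path + (nxt,)))
        return None  # goal unreachable via `neighbours`

The self-play analogue would then regress `estimate` toward the actual remaining proof lengths observed on solved goals - exactly the error term described above.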
- hliyan 1 month ago: Considering that mathematics is, at its core, a language for defining relationships between quantities, and then relationships between those relationships, and so on and so forth, I think it's fair to assume that the possible number of such relationships is infinite. Some of these relationships will obviously be useful in the real world, but they don't always have to be. I too suspect that we can keep on building theorems on top of theorems with increasing complexity, until a point is reached where it becomes just too tedious (but not impossible) for a human being to work through the proof.
- enugu 1 month ago: > we can keep on building theorems on top of theorems with increasing complexity
This is a somewhat bleak picture of math. We also have the opposite phenomenon of increasing simplicity: both statements and proofs become more straightforward and simple once one has access to deeper mathematical constructions.
For example: Bezout's theorem would like to state that two curves of degree m and degree n intersect in mn points. Except that you have two parallel lines intersecting in 0 instead of 1·1 = 1 point, two disjoint circles intersecting in 0 instead of 2·2 = 4 points, and a line tangent to a circle intersecting in 1 point instead of 1·2 = 2 points. These exceptions merge into a simple picture once one goes to projective space, complex numbers and schemes. Complex numbers lead to lots of other instances of simplicity.
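For reference, the clean statement those exceptions merge into (a standard formulation, counting intersections with multiplicity over the complex projective plane):

    \[
    C, D \subset \mathbb{P}^2_{\mathbb{C}} \text{ of degrees } m, n,
    \text{ with no common component}
    \;\Longrightarrow\; \sum_{p \in C \cap D} i_p(C, D) = mn
    \]

Here i_p(C, D) is the intersection multiplicity at p; the tangent line contributes i_p = 2 at its single point of contact, recovering 1·2 = 2.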
Similarly, proofs can become simple where before one had complicated ad-hoc reasoning.
Feynman once made the same point about the laws of physics: in contrast to someone figuring out the rules of chess by looking at games, who first figures out the basic rules (how pieces move) and then moves on to the complex exceptions (en passant, pawn promotion), what often happens in physics is that different sets of rules for apparently distinct phenomena become aspects of a unity (e.g. heat, light and sound were seen as distinct things but are now all seen as movements of particles; the unification of electricity and magnetism).
Of course, this unification pursuit is never complete. Mathematics books/papers constantly seem to pull a rabbit out of a hat. This leads to 'motivation' questions about why such a construction/expression/definition was made. For a few of those questions, the answer only becomes clear after more research.
- Chris2048 1 month ago: > the possible number of such relationships is infinite
I think you need to be careful talking about "infinite" in the context of math. If the number of quantities, relationships etc. is finite, so are all their combinations. Even things like the infinitude of available numbers might have fixed patterns that render their relevant properties effectively finite, and lead to further distinctions, e.g. finite vs countable, etc.
Personally, I feel like math has a bit of a legacy problem. It holds on to the conventions of an art that is very old, with very different initial assumptions at its conception, and this is now holding it back somehow. I lack the background to effectively demonstrate this, other than "things I know/understand seem less intuitive in standard mathematical terms" - e.g. generating functions and/or integrals feel easier to understand (to me) when you understand them to be software-like 'loops'.
In fact, the idea of "constructivist math" seems (again, to me) to beg for a more algorithmic/computational approach.
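As a toy illustration of the 'loops' framing (my own example, not from the thread): the generating function 1/(1 - x - x^2) of the Fibonacci numbers can be read as a loop that emits coefficients:

    # Coefficients of 1 / (1 - x - x^2), the Fibonacci generating function,
    # computed by iterating the recurrence the series encodes.
    def fib_coeffs(n):
        coeffs = [0] * n
        coeffs[0] = 1  # constant term of the series
        for k in range(1, n):
            coeffs[k] = coeffs[k - 1] + (coeffs[k - 2] if k >= 2 else 0)
        return coeffs

    print(fib_coeffs(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]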
- el_memorioso 1 month ago: The standard explanation of integrals as summing the areas of rectangles of decreasing width seems extremely intuitive to me, without requiring the baggage of having to know some computer language. Generating functions in code are basically a rote repetition of the mathematical definitions, requiring that you also understand variables and functions and other things unrelated to the core idea.
- Chris2048 1 month ago: But that "standard explanation" is a process, not a definition. Riemann sums can't be used with all integrals.
In any case, if we stick with Riemann sums, there should be a strong relationship to Generating Functions (which there is).
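To spell out the process-as-loop view (a minimal left-Riemann-sum sketch, my own example):

    # "Summing areas of rectangles of decreasing width" as a literal loop:
    # a left Riemann sum of f over [a, b] with n rectangles.
    def riemann(f, a, b, n):
        dx = (b - a) / n
        return sum(f(a + i * dx) for i in range(n)) * dx

    print(riemann(lambda x: x * x, 0.0, 1.0, 100_000))  # ~0.33333 = 1/3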
> Generating functions in code are basically a rote repetition of the mathematical definitions
GFs in mathematics may have, for example, set-theoretic definitions that look nothing like, say, Turing machines. Any non-constructivist math is automatically not like code.
- camjw 1 month ago: > a language for defining relationships between quantities
Could you expand on this? I don't see maths as a language for quantities specifically (e.g. what does symmetry have to do with quantities?).
> just too tedious (but not impossible) for a human being to work through the proof.
Arguably this already happened with the four colour theorem.
- initramfs2 1 month ago: Just the ZFC axioms alone are already infinite: there are axiom schemas ranging over an infinite number of actual statements. And that's just statements, without even considering symbols, as you're saying.
- bananaflag 1 month ago: Well, you can just use NBG instead, which is finitely axiomatizable.
- Someone 1 month ago: > I think it's fair to assume that the possible number of such relationships is infinite
That's easily proven to be true. "Two plus two equals four" is a theorem; so is "three plus three equals six", etc.
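In Lean (the proof assistant discussed elsewhere in this thread), each member of that infinite family is its own one-line theorem - a minimal Lean 4 sketch:

    theorem two_plus_two : 2 + 2 = 4 := rfl
    theorem three_plus_three : 3 + 3 = 6 := rfl
    -- ...and so on: one distinct theorem per instance, unless you
    -- quantify over n and prove a general statement instead.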
- im3w1l 1 month ago: Well, I think the question should be how many interesting relationships there are.
- acrophiliac 1 month ago: "perhaps advanced ML research models will be more analogous to improvements in climbing gear, aiding the development of mountaineering as a sport, than an intrusion of corporate control into our minds". I would argue that GPS and satellite-connected phones are already poisoning the wilderness experience.
- credit_guy 1 month ago: Test-driven development is not about tests. It's about writing code. The tests are there just to keep the bugs away.
Mathematics is just proof-driven development. To a spectator it might look like mathematics is about writing proofs, but that's no different from seeing a software developer write a lot of tests. The proofs are the best tools against insidious logic bugs that the society of mathematicians has come up with over the last few hundred years. Mathematicians would welcome automating all the proofs, just like software engineers are happy for code assistants to take over the task of writing tests.
- A_D_E_P_T 1 month ago: We'll know that the time draws nearer when an AI confirms or refutes Mochizuki's proof of the abc conjecture. As of right now, I don't think they're capable of doing that. And, as they can't even check a very (very!) complex proof, they won't be able to conjure any inhumanly complex proofs de novo.
Also:
> To expand: what if the practice of mathematics becomes completely determined by the diktats of a vast capitalist machinery of proprietary machine learning models churning out proof after proof, and theory after theory, conjured from the aether of all possible true statements?
I don't think that this is possible even in theory, as computational resources are limited and "the aether of all possible true statements" is incomprehensibly vast. (There's a massive orders-of-magnitude difference between the number of true-seeming-yet-false statements and the number of elementary particles in the visible universe. More statements than particles.) You can't brute-force it.
- andyjohnson0 1 month ago: > We'll know that the time draws nearer when an AI confirms or refutes Mochizuki's proof of the abc conjecture. As of right now, I don't think they're capable of doing that. And, as they can't even check a very (very!) complex proof, they won't be able to conjure any inhumanly complex proofs de novo.
I agree, but... Spend time formalising a large part of existing mathematics and its proofs, train a bunch of sufficiently powerful generative models on that, equip them with cooperative problem solving and proof strategies, give them access to proof assistants and adequate compute resources, and something interesting could happen.
I suspect the barrier is finding a business model that would pay for this. Turning mathematics into an industrial, extruded-on-demand product might work, but I don't know who (except maybe the NSA) would stump up the money.
- awanderingmind 1 month ago: I am not suggesting models will be capable of generating 'all' proofs - that is clearly impossible. Merely that they will get better at doing so, and there is no clear reason at the moment to believe they will never reach a human level of competence. If you have one model functioning at such a level, it is presumably trivial to have a million of them, none of which will need to be paid, housed, or to sleep, etc.
- n4r9 1 month ago: Why would an AI confirmation or rejection be more convincing than the proof itself?
- zarzavat 1 month ago: Presumably an AI would formalise the proof in a system such as Lean; then you only need to trust the kernel of that proof system.
Rejecting a proof would be more complicated, because while for confirming a proof you only need to check that the main statement in the formalisation matches that of the conjecture, showing that a proof has been rejected requires knowledge of the proof itself (in general).
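As a toy Lean 4 sketch of the confirmation case (nothing like the abc proof, of course): once the file elaborates, a human only has to check that the statement line matches the conjecture; the proof term after `:=` is verified mechanically by the kernel:

    -- Only the statement needs human review; the kernel checks the proof.
    theorem add_comm_nat (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b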
- lblume 1 month ago: > requires knowledge of the proof itself (in general)
Why? If a proof is wrong, it has to be locally invalid, i.e. draw some inference which is invalid according to the rules of logic. Of course the antecedent could have been defined pages earlier, but in and of itself the error must be local, right?
- esperent 1 month ago: Rejection: an incredibly complex proof can fall for (comparatively) simple reasons. If the AI scans the entire proof and says "yep, there's the flaw: page 126, theorem X contradicts <well-established known theorem>", then a human can verify this without having to understand the whole proof.
This could lead to the proof being rejected entirely, or fixed and strengthened.
Confirmation: if the AI understands it well enough that we're even considering asking it to confirm the proof, then you can do all kinds of things. You can ask it to simplify the entire proof to make it easier for humans to verify. You can ask it questions about parts of the proof you don't understand. You can ask it if there are any interesting corollaries or applications in other fields. Maybe you can even ask it to rewrite the whole thing in Lean (although, like the author, I know nothing about Lean and have no idea if this would be useful).
- bryanrasmussen 1 month ago: Also, why would capitalists expend resources on churning out proof after proof when mathematical proofs are not patentable?
- awanderingmind 1 month ago: A reasonable question - another way of looking at it is that theorems are just a side effect of mathematical research. Much of the world economy depends on things like cryptography, which involves a bunch of theorems. The question is then 'what as yet undiscovered mathematical realms might models think up that could make people money?' It is hard to imagine what doesn't exist yet, but much harder to imagine that all potentially profitable mathematics has already been discovered. This could 'just' look like algorithmic improvements.
- auggierose 1 month ago: That will change quickly if capitalists see that as an obstacle. Right now, they don't really care about that, as for example software (much of which is just math) is already patentable in the US.
- skybrian 1 month ago: I don't share the author's concern about a corporate takeover of mathematics. Most mathematics isn't of commercial interest. Even when it is, it seems like there would often be good reason to share it, like any other source code. Is Lean so different from other programming languages?
Such libraries would need documentation, or nobody would know when to use them, and then sharing is pointless.
If corporations build them, they would have to decide what to contribute to the commons and what to keep private. But that’s no different than any other language.
- hackable_sand 1 month ago: When does math become recreational for people?
- plopilop 1 month ago: How is that any different from people programming for fun?
- coolcase 1 month ago: At a young age?
- awanderingmind 1 month ago: In my case, when you aren't paid to do it, and don't have (much) time to study it formally anymore.
- srean 1 month ago: When you do it with love.
- moffkalast 1 month ago: In poker?
- N2yhWNXQN3k9 1 month ago: This article is written in an unnecessarily extravagant style, IMO.
Also, I appreciate anonymity, but, to my point:
> I live by myself in a remote mountain cave beyond the ken of civilised persons, and can only be contacted during a full moon, using certain arcane rites that are too horrible to speak of.
Okay.
- BSDobelix 1 month ago: >> I live by myself in a remote mountain cave
= I live in California, and the nearest Starbucks is more than 20 miles away.
>>can only be contacted during a full moon
= As a night person, I am awake when the streetlight outside my house turns on.
>>certain arcane rites that are too horrible to speak of
= In order to contact me, you must install Microsoft Teams.
Overall, it's not that bad, except for the MS Teams thing. ;)
- bubblyworld 1 month ago: The man is South African; we are geologically blessed and have a lot of pleasant remote mountain caves =)
- BSDobelix 1 month ago: Okay, I take everything back. The cave, the moon, and the rituals are all real. It's impressive how much reality is based on location :-)
- awanderingmind 1 month ago: True!
I don't literally live in a cave, but fortunately not everyone is so allergic to whimsical language :D.
- Chris2048 1 month ago: The moon thing might still not be literal, similar to "once in a blue moon", i.e. rarely.
- gilleain 1 month ago: Perhaps the writer is a fan of Grothendieck, who did almost exactly this - lived in a remote village in the Pyrenees:
https://en.wikipedia.org/wiki/Alexander_Grothendieck#Retirem...
"Local villagers helped sustain him with a more varied diet after he tried to live on a staple of dandelion soup." - like most people would.
- nottorp 1 month ago: As always, HN has no sense of humour...
- Chris2048 1 month ago: It does, but it has to be particularly funny, and the author has to take on the risk there. Otherwise, there are too many people who think themselves humorous but otherwise have little to contribute. The result is current-day Reddit. It's beyond cringe; it's just low-effort repetitive humour. Some think it's just dead-internet/botted-to-hell, but I suspect even bots could make a better effort.
- ZYbCRq22HbJ2y7 1 month ago: I don't think the commenter was saying anything about the quality of humor in that statement. Rather, they were attempting to relay that the style of writing is overly verbose, even on the contact page. Also, isn't the parent comment attempting to be humorous as well?
- deadbabe 1 month ago: Humor is banned here.
- awanderingmind 1 month ago: Some people do!
- milesrout 1 month ago: Writing like a wanker isn't funny.
- vixen99 1 month ago: Why not just say to yourself "I don't think that's funny", instead of generalizing on behalf of others with a word that is, in essence, meaningless - or rather, means whatever you personally want it to mean on any particular occasion?
- xdfgh1112 1 month ago: Agreed. If I had seen that first I wouldn't have clicked.