Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?
885 points by embedding-shape 20 hours ago | 434 comments
While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether these sorts of comments should be welcome on HN or not.
Some examples:
- https://news.ycombinator.com/item?id=46164360
- https://news.ycombinator.com/item?id=46200460
- https://news.ycombinator.com/item?id=46080064
Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).
What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other guidelines currently)? Should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else entirely?
- tptacek 18 hours ago
They already are against the rules here.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
(This is a broader restriction than the one you're looking for).
It's important to understand that not all of the rules of HN are on the Guidelines page. We're a common law system; think of the Guidelines as something akin to a constitution. Dan and Tom's moderation comments form the "judicial precedent" of the site; you'll find things in there like "no Internet psychiatric diagnosis" and "not owing $publicfigure anything but owing this community more" and "no nationalist flamewar" and "no hijacking other people's Show HN threads to promote your own thing". None of those are on the Guidelines page either, but they're definitely in the guidelines here.
- embedding-shape 12 hours ago
Thanks for all the references!
One comment stands out to me:
> Whether to add it to the formal guidelines (https://news.ycombinator.com/newsguidelines.html) is a different question, of course. I'm reluctant to do that, partly because it arguably follows from what's there, partly because this is still a pretty fuzzy area that is rapidly evolving, and partly because the community is already handling this issue pretty well.
I guess I'm raising this question because it feels slightly off that people can't really know about this unwritten rule until they break it, or until they see someone else break it and get told why. It is true that the community seems to handle it with downvotes, but it might not be clear enough why something gets downvoted; people can't see the intent. It also seems like an inefficient way of communicating community norms, telling users about them only once they've broken them.
Being upfront with what rules and norms to follow, like the guidelines already do for most things, feels more honest and welcoming for others to join in on discussions.
- tptacek 12 hours ago
The rules are written; they're just not all in that one document. The balance HN strikes here is something Dan has worked out over a very long time. There are at least two problems with arbitrarily fleshing out the guidelines ("promoting" "case law" to "statutes", as it were):
* First, the guidelines get too large, and then nobody reads them all, which makes the guideline document less useful. Better to keep the guidelines page reduced to a core of things, especially if those things can be extrapolated to most of the rest of the rules you care about (or most of them, plus a bunch of stuff that doesn't come up often enough to need space on that page).
* Second, whatever you write in the guidelines, people will incline to lawyer and bicker about. Writing a guideline implies, at least for some people, that every word is carefully considered and that there's something final about the specific word choices in the guidelines. "Technically correct is the best kind of correct" for a lot of nerds like us.
Perhaps "generated comments" is trending towards a point where it earns a spot in the official guidelines. It sure comes up a lot. The flip side though is that we leave a lot of "enforcement" of the guidelines up to the community, and we have a pretty big problem with commenters randomly accusing people of LLM-authoring things, even when they're clearly (because spelling errors and whatnot) human-authored.
Anyways: like I said, this is pretty well-settled process on HN. I used to spend a lot of time pushing Dan to add things to the guidelines; ultimately, I think the approach they've landed on is better than the one you (and, once, I) favored.
- Rendello 18 hours ago
This is the correct answer. If you're curious about what other sorts of things are disallowed by common law, look at dang and tomhow's comments that say "please don't":
dang: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
tomhow: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
- mjmas 13 hours ago
Corrected second link: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
- Akronymus 17 hours ago> This is the correct answer.
Where does that saying come from? I keep seeing it in a lot of different contexts but it somehow feels off to me in a way I can't really explain.
- sfink 17 hours ago
It's not "off" unless you're simply reading it literally. If you do that, then it's a verbose way of saying "I agree". But the connotations are something like "I agree, strongly, and in particular am implying (possibly just for effect) that there are objectively right and wrong answers to this question and the other answers are wrong." The main difference is the statement that there is an objective answer to what people may be treating as a subjective question.
If it helps, you can think of it as saying more about possible disagreeing opinions than about the specific opinion expressed. "This answer is right, and the people who disagree are 'objectively' wrong."
It took me some time to catch on to this. It can certainly be jarring or obnoxious, though sometimes it can be helpful to say "yo people, you're treating this like a subjective opinion, but there are objective reasons to conclude X."
- Rendello 17 hours ago
It's the first time I've ever commented that, and I was trying to figure out a way to omit it. I don't like that sort of phrase either, I especially hate comments that just go "This.", but they're rare on HN so I'm in good company.
Ultimately, I put it because:
- It was the most directly informative comment on the thread;
- It had been downvoted (greyed out) to the very bottom of the thread; and
- I wanted to express my support before making a fairly orthogonal comment without whiplashing everyone.
The whiplashing concern is the problem I run into most generally. It can be hard to reply to someone with a somewhat related idea without making it seem like you're contradicting them, particularly if they're being dogpiled on with downvotes or comments. I'd love to hear other ways to go about this, I'm always trying to improve my communication.
- versavolt 13 hours ago
That answer is incorrect. Common law can only be created by courts.
- AnimalMuppet 12 hours ago
Uh huh. And, as tptacek said, dang and tomhow are the courts here. So what they have consistently ruled is the common law here.
- TomasBM 18 hours ago
Yes.
The pre-LLM equivalent would be: "I googled this, and here's what the first result says," and copying the text without providing any additional commentary.
Everyone should be free to read, interpret and formulate their comments however they'd like.
But if a person outsources their entire thinking to an LLM/AI, they don't have anything to contribute to the conversation themselves.
And if the HN community wanted pure LLM/AI comments, they'd introduce such bots in the threads.
- christoff12 15 hours ago
Good point
- flkiwi 18 hours ago
I read comments citing AI as essentially equivalent to "I ran a $searchengine search and here is the most relevant result." It's not equivalent, but it has one identical issue and one new-ish one:
1. If I wanted to run a web search, I would have done so
2. People behave as if they believe AI results are authoritative, which they are not
On the other hand, a ban could result in a technical violation in a conversation about AI responses where providing examples of those responses is entirely appropriate.
I feel like we're having a larger conversation here, one where we are watching etiquette evolve in realtime. This is analogous to "Should we ban people from wearing bluetooth headsets in the coffee shop?" in the 00s: people are demonstrating a new behavior that is disrupting social norms but the actual violation is really that the person looks like a dork. To that end, I'd probably be more for public shaming, potentially a clear "we aren't banning it but please don't be an AI goober and don't just regurgitate AI output", more than I would support a ban.
- pyrale 16 hours ago> "I ran a $searchengine search and here is the most relevant result."
Except it's "...and here is the first result it gave me, I didn't bother looking further".
- giancarlostoro 17 hours ago> 2. People behave as if they believe AI results are authoritative, which they are not
Web search has the same issue. If you don't validate it, you wind up with the same problem.
- 9rx 15 hours ago> people are demonstrating a new behavior that is disrupting social norms
The social norm has always been that you write comments on the internet for yourself, not others. Nothing really changes if you now find enjoyment in adding AI output to your work. Whatever floats your boat, as they say.
- sapphicsnail 14 hours ago
The issue isn't people posting AI-generated comments on the Internet as a whole, it's whether it should be allowed in this space. Part of the reason I come to HN is that the quality of comments is pretty good relative to other places online. I think it's a legitimate question whether AI comments would help or hinder discussion here.
- 9rx 14 hours ago
That's a pretty good sign that the HN user base as a rule finds most enjoyment in writing high quality content for themselves. All questions are legitimate, but in this circumstance what reason is there to believe that they would find even more enjoyment from reducing the quality?
It seems a lot like code. You can "vibe code" your way into an ungodly mess, but those who used to enjoy the craft of writing high quality code before LLMs arrived still seem to insist on high quality code even if an LLM is helping produce it now. It is highly likely that internet comments are no different. Those who value quality will continue to. Those who want garbage will produce it, AI or not.
Much more likely is seeing the user base shift over time towards users that don't care about quality. Many a forum has seen that happen long before LLMs were a thing, and it is likely to happen to forums again in the future. But the comments aren't written for you (except your own, of course) anyway, so... It is not rational to want to control what others are writing for themselves. But you can be responsible for writing for yourself what you want to see!
- edmundsauto 11 hours ago
Would you object to high quality AI comments?
- terribleperson 2 hours ago
Has it? More than one forum has expected that commentary should contribute to the discussion. Reddit is the most prominent example, where originally upvotes were intended to be used for comments that contributed to the discussion. It's not the first or only example, however.
Sure, the motivation for many people to write comments is to satisfy themselves. The contents of those comments should not be purely self-satisfying, though.
- WorldPeas 14 hours ago
I think it's closer to the "glasshole" trend, where the social pressure actually worked to make people feel less comfortable about using it publicly. This is an entirely vibes-based judgement, but presenting unaltered AI speech within your own feels more imposing and authoritative (as wagging around a potentially-on camera did then). This being the norm on other platforms has degraded my willingness to engage with potentially infinite and meaningless streams of bloviation rather than the (usually) concise and engaging writings of humans.
- icoder 15 hours ago
Totally agree if the AI or search results are a (relatively) direct answer to the question.
But what if the AI is used to build up a(n otherwise) genuine human response, like: 'Perhaps the reason behind this is such-and-such, (a quick google)|($AI) suggests that indeed it is common for blah to be blah, so...'
- 0manrho 6 hours ago> ($AI) suggests
Same logic still applies. If I gave a shit what it "thought" or suggests, I'd prompt the $AI in question, not HN users.
That said, I'm not against a monthly (or whatever regular periodic interval that the community agrees on) thread that discusses the subject, akin to "megathreads" on reddit. Like interesting prompts, or interesting results or cataloguing changes over time etc etc.
It's one of those things that can be useful to discuss in aggregate, but separated out into individual posts just feels like low effort spam to farm upvotes/karma on the back of the flavor of the month. Much in the same way that there's definitely value in the "Who's Hiring/Trying to get hired" monthly threads, but that value/interest drops precipitously if each comment/thread within them were each their own individual submission.
- munchbunny 15 hours ago> 2. People behave as if they believe AI results are authoritative, which they are not
I'm not so sure they actually believe the results are authoritative, I think they're being lazy and hoping you will believe it.
- flkiwi 15 hours ago
This is a bit of a gravity vs. acceleration issue, in that the end result is indistinguishable.
- charcircuit 17 hours ago>If I wanted to run a web search, I would have done so
While true, many times people don't want to do this because they are lazy. If they just instead opened up chatgpt they could have instantly gotten their answer. It results in a waste of everyone's time.
- MarkusQ 17 hours ago
This begs the question. You are assuming they wanted an LLM-generated response, but were too lazy to generate one. Isn't it more likely that the reason they didn't use an LLM is that they didn't want an LLM response, so giving them one is... sort of clueless?
If you asked someone how to make French fries and they replied with a map-pin-drop on the nearest McDonald's, would you feel satisfied with the answer?
- charcircuit 17 hours ago
It's more like someone asks if there are McDonald's in San Francisco, and then someone else searches "mcdonald's san francisco" on Google Maps and then replies with the result. It would have been faster for the person to just type their question elsewhere and get the result back immediately instead of someone else doing it for them.
- allenu 17 hours ago
I think a lot of times, people are here just to have a conversation. I wouldn't go so far as to say someone who is pontificating and could have done a web search to verify their thoughts and opinions is being lazy.
This might be a case of just different standards for communication here. One person might want the absolute facts and assumes everyone posting should do their due diligence to verify everything they say, but others are okay with just shooting the shit (to varying degrees).
- charcircuit 16 hours ago
I've seen this happen too. People will comment and say in the comment that they can't remember something when they could have easily refound that information with chatgpt or google.
- officeplant 17 hours ago> If they just instead opened up chatgpt they could have instantly gotten their answer.
Great now we've wasted time & material resources for a possibly wrong and hallucinated answer. What part of this is beneficial to anyone?
- Kim_Bruning 13 hours ago
Counterpoint:
Frankly, it's a skill thing.
You know how some people can hardly find the back of their own hands if they googled them?
And then there's people (like eg. experienced wikipedians doing research) who have google-fu and can find accurate information about the weirdest things in the amount of time it takes you to tie your shoes and get your hat on.
Now watch how someone like THAT uses chatgpt (or some better LLM). It's very different from just prompting with a question. Often it involves delegating search tasks to the LLM (and opening 5 Google tabs alongside). And they get really interesting results!
- droopyEyelids 17 hours ago
Well put. There are two sides of the coin: the lazy questioner who expects others to do the work researching what they would not, and the lazy/indulgent answerer who basically LMGTFY's it.
Ideally we would require people who ask questions to say what they've researched so far, and where they got stuck. Then low-effort LLM or search engine result pages wouldn't be such a reasonable answer.
- WorldPeas 14 hours ago
I haven't thought about LMGTFY since stackoverflow. Usually though I see responses with people thrusting forth AI answers that provide more reasoning; back then LMGTFY was more about rote conventions (e.g. "how do you split a string on ,") and AI is used more for "what are ways that solar power will change grid dynamics".
- Terr_ 17 hours ago
Agreed on the similar-but-worse comparison to the laziest possible web-searches of yesteryear.
To introspect a bit, I think the rote regurgitation aspect is the lesser component. It's just rude in a conventional way that isn't as threatening. It's the implied truth/authority of the Great Oracular Machine which feels more-dangerous and disgusting.
- flkiwi 16 hours ago
There’s also a whole “gosh golly look at me using the latest fad!” demonstration aspect to this. People status signaling that they’re “in”. Thus the Bluetooth earpiece comment.
It’s clumsy and has the opposite result most of the time, but people still do it for all manner of trends.
- ozgung 14 hours ago
I think doing your research using search engine/AI/books and paraphrasing your findings is always valuable. And you should cite your resources when you do so, e.g. “ChatGPT says that…”
> 1. If I wanted to run a web search, I would have done so
Not everyone has access to the latest Pro models. If AI has something to add for the discussion and if a user does that for me I think it has some value.
> 2. People behave as if they believe AI results are authoritative, which they are not
AI is not authoritative in 2025. We don’t know what will happen in 2026. We are at the initial transition stage for a new technology. Both the capabilities of AI and people’s opinions will change rapidly.
Any strict rule/ban would be very premature and shortsighted at this point.
- masfuerte 19 hours ago
Does it need a rule? These comments already get heavily down-voted. People who can't take a hint aren't going to read the rules.
- eskori 19 hours ago
If HN mods think the rule should be applied whatever the community thinks (for now), then yes, it needs a rule.
As I see it, down-voting is an expression of the community posture, rules are an expression of the "space" posture. It's up to the space to determine if there is something relevant enough to include it in the rules.
And again, as I see it, community should also have a way to at least suggest modifications of the rules.
I agree with you in "People who can't take a hint aren't going to read the rules". But as they say: "Ignorance of the law does not exempt one from compliance."
- rsync 19 hours ago
This is my view.
I tend to dislike these types of posts, but a properly designed and functioning vote mechanism should take care of it.
If not, it is the voting mechanism that should be tuned - not new rules.
- dormento 18 hours ago> These comments already get heavily down-voted.
Can't find the link right now (cause why would I save a thread like that..) but I've seen, more than once, situations where people get defensive of others that post AI slop comments. Both times it was people in YC companies that have personal interests related to AI. Both times it looked like a person defending sockpuppets.
- al_borland 17 hours ago
I think it helps having guidelines and not relying on user sentiment alone. When I first joined HN I read the guidelines and it did make me alter my comments a bit. Hoping everyone who joins goes back to review the up/down votes on their comments and then takes away the right lesson, with limited information as to why those votes were received, seems like wishful thinking. For those who do question why they keep getting downvoted, it might lead them to check the guidelines, where finding the right supporting information would be useful.
A lot of the guidelines are about avoiding comments that aren’t interesting. A copy/paste from an LLM isn’t interesting.
- BrtByte 17 hours ago
HN tends to self-regulate pretty well
- notahacker 17 hours ago
I'm veering towards this being the answer. People downvote the superfluous "I don't have any particular thoughts on this, but here's what a chatbot has to say" comments all the time. But also, there are a lot of discussions around AI on HN, and in some of those cases posting verbatim responses from current generation chatbots is a pretty good indication of whether they can give accurate responses when posed problems of this type, or they still make these mistakes, or this is what happens when there's too much RLHF or a silly prompt...
- stack_framer 17 hours ago
I'm here to learn what other people think, so I'm in favor of not seeing AI comments here.
That said, I've also grown exceedingly tired of everyone saying, "I see an em dash, therefore that comment must have come from AI!"
I happen to like em dashes. They're easy to type on macOS, and they're useful in helping me express what I'm thinking—even if I might be using them incorrectly.
- anotherevan 12 hours ago
I have actually been using em dashes more, mainly because of everyone whinging about them.
- lillecarl 6 hours ago
I asked an LLM and he said that antisocial behavior is the coolest thing in 2025
- josefresco 19 hours ago
As a community I think we should encourage "disclaimers", aka "I asked <AIVENDOR>, and it said..." The information may still be valuable.
We can't stop AI comments, but we can encourage good behavior/disclosure. I also think brevity should still be rewarded, AI or not.
- superfishy 19 hours ago
I agree. The alternative is prohibiting this practice and having these posters not disclose their use of LLMs, which in many cases cannot really be easily detected.
- TulliusCicero 18 hours ago
No, most don't think they're doing anything wrong, they think they're actually being helpful. So, most wouldn't try to disguise it, they'd just stop doing it, if it was against the rules.
- abustamam 15 hours ago
Agreed with them not thinking they're doing anything wrong. Disagree with them not wanting to disguise it. If they don't think they're doing anything wrong, then they likely don't think it's against the rules. If they knew it were against the rules, they'd probably disguise it better.
This may actually be a good thing because it'd force them to put some thought into dissecting the comment from AI instead of just pasting it in wholesale. Depending on how well they try to disguise it, of course.
- tpxl 19 hours ago
I think they should be banned if there isn't a contribution besides what the LLM answered. It's akin to "I googled this", which is uninteresting.
- mattkrause 19 hours ago
I do find it useful in discussions of LLMs themselves. (Gemini did this; Claude did it too but it used to get tripped up like that).
I do wish people wouldn’t do it when it doesn’t add to the conversation but I would advocate for collective embarrassment over a ham-fisted regex.
- MBCook 19 hours ago
That provides value as you’re comparing (and hopefully analyzing) output. It’s totally on topic.
In a discussion of RISC-V and whether it can beat ARM, someone just posting “ChatGPT says X” adds absolutely nothing to the discussion but noise.
- autoexec 13 hours ago
It's always fun when people point out an LLM's insane responses to simple questions that shatter the illusion of them having any intelligence. But besides giving us a good laugh when AI has a meltdown failing to produce a seahorse emoji, there are other times it might be valuable to discuss how they respond, such as when those responses might be dangerous, censored, or clearly filled with advertising/bias.
- dormento 18 hours ago
IMHO it's far worse than "I googled this". Googling at least requires a modicum of understanding. Pasting slop usually means that the person couldn't be bothered to filter out garbage, but wants to look smart anyway.
- tptacek 17 hours ago
They are already banned.
- venturecruelty 11 hours ago
Weird that I keep seeing them then.
- tptacek 11 hours ago
That's what the "flag" button is for.
- Ekaros 19 hours agoI think "I googled this" can be valid and helpful contribution. For example looking up some statistic or fact or an year. If that is also verified and sanity checked.
- sejje 19 hours ago
Yes, while citing an LLM in the same way is probably not as useful.
"I googled this" is only helpful when the statistic or fact they looked up was correct and well-sourced. When it's a reddit comment, you derail into a new argument about strength of sources.
The LLM skips a step, and gets you right to the "unusable source" argument.
- Ekaros 19 hours ago
I agree. Saying "I googled this and someone has this opinion" is pretty useless, be that someone an LLM or a random poster on the internet.
Still, I will argue that someone actually doing the legwork, even by search engine, with reasonable evaluation of a few sources, is often quite a valuable contribution. Sometimes even if it is done to discredit someone else.
- TulliusCicero 18 hours ago"I googled this" usually means actually going into a page and seeing what it says, not just copy-pasting the search results page itself, which is the equivalent here.
- skywhopper 17 hours ago
In that case, the correct post here would be to say “here’s the stat” and cite the actual source (not “I googled it”), and then add some additional commentary.
- zby 17 hours ago
The contribution is the prompt.
- Rebelgecko 18 hours ago
This is not just about banning a source; it is about preserving the core principle of substantive, human-vetted content on HN. Allowing comments that are merely regurgitations of an LLM's generic output—often lacking context, specific experience, or genuine critical thought—treats the community as an outsourced validation layer for machine learning, rather than an ecosystem for expert discussion. It's like allowing a vending machine to contribute to a Michelin-starred chef's tasting menu: the ingredients might be technically edible, but they completely bypass the human skill, critical judgment, and passion that defines the experience. Such low-effort contributions fundamentally violate the "no shallow dismissals" guideline by prioritizing easily manufactured volume over unique human insight, inevitably degrading the platform's high signal-to-noise ratio and displacing valuable commentary from those who have actually put in the work.
- teach 16 hours ago
slow clap
A tip of the hat for this performance art
- UncleEntity 16 hours ago
One has to wonder if this is how the next generation of kids is going to write after being raised exclusively on AI-generated content?
- WorldPeas 14 hours ago
In this digital world, the core components of writing can feel overwhelming, by leveraging crutches learned by reading hundreds of dead internet comments, the core principles of writing in an ever-shifting landscape can be more crucial than ever.
(written by a human with help from https://aiphrasefinder.com/common-chatgpt-phrases/)
- yellow_lead 18 hours ago
Please stop
- gortok 19 hours ago
While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying to have folks think that by using AI they’re doing others a favor. If I wanted to know what an AI thinks, I’d ask it. I’m here because I want to know what other people think.
At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
- sbrother 19 hours ago
I strongly agree with this sentiment and I feel the same way.
The one exception for me though is when non-native English speakers want to participate in an English language discussion. LLMs produce by far the most natural sounding translations nowadays, but they imbue that "AI style" onto their output. I'm not sure what the solution here is because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.
- SAI_Peregrinus 17 hours ago
If I want to participate in a conversation in a language I don't understand I use machine translation. I include a disclaimer that I've used machine translation & hope that gets translated. I also include the input to the machine translator, so that if someone who understands both languages happens to read it they might notice any problems.
- KaiserPro 14 hours ago
You are adding your comments and translating them; that's fine.
If it was just a translation, then that adds no value.
- MLgulabio 15 hours ago
You are joking, right?
I mean, we're probably not talking about someone not knowing English at all, that wouldn't make sense, but I'm German and I'd probably write German.
I would often enough tell some LLM to clean up my writing (not on HN, sry, I'm too lazy for HN)
- kps 19 hours ago
When I occasionally use MTL into a language I'm not fluent in, I say so. This makes the reader aware that there may be errors unknown to me that make the writing diverge from my intent.
- sejje 19 hours ago
I think multi-language forums with AI translators are a cool idea.
You post in your own language, and the site builds a translation for everyone, but they can also see your original etc.
I think building it as a forum feature rather than a browser feature is maybe worthwhile.
- guizadillas 18 hours ago
Non-native English speaker here:
Just use a spell checker and that's it, you don't need LLMs to translate for you if your target is learning the language
- coffeefirst 18 hours ago
Better yet, I’d rather read some unusual word choices from someone who’s clearly put a lot of work into learning English than read a robot.
- emaro 19 hours ago
Agreed, but if someone uses LLMs to help them write in English, that's very different from the "I asked $AI, and it said" pattern.
- SoftTalker 17 hours ago
I honestly think that very few people here are completely non-conversant in English. For better or worse, it's the dominant language. Almost everyone who doesn't speak English natively learns it in school.
I'm fine with reading slightly incorrect English from a non-native speaker. I'd rather see that than an LLM interpretation.
- parliament32 18 hours ago> I'm not sure what the solution here
The solution is to use a translator rather than a hallucinatory text generator. Google Translate is exceptionally good at maintaining naturalness when you put a multi-sentence/multi-paragraph block through it -- if you're fluent in another language, try it out!
- Kim_Bruning 16 hours ago
Google translate used to be the best, but it's essentially outdated technology now, surpassed by even small open-weight multilingual LLMs.
Caveat: The remaining thing to watch out for is that some LLMs are not -by default- prompted to translate accurately due to (indeed) hallucination and summarization tendencies.
* Check a given LLM with language-pairs you are familiar with before you commit to using one in situations you are less familiar with.
* always proof-read if you are at all able to!
Ultimately you should be responsible for your own posts.
- akavi 17 hours ago
You are aware that insofar as AI chat apps are "hallucinatory text generator(s)", then so is Google Translate, right?
(while AFAICT Google hasn't explicitly said so, it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT)
- smallerfish 17 hours ago
Google Translate doesn't hold a candle to LLMs at translating between even common languages.
- lurking_swe 16 hours ago
IMO chatgpt is a much better translator, especially if you’re using one of their normal models like 5.1. I’ve used it many times with an obscure and difficult Slavic language that I’m fluent in, for example, and chatgpt nailed it whereas google translate sounded less natural.
The big difference? I could easily prompt the LLM with “i’d like to translate the following into language X. For context this is a reply to their email on topic Y, and Z is a female.”
Doing even a tiny bit of prompting will easily get you better results than google translate. Some languages have words with multiple meanings and the context of the sentence/topic is crucial. So is gender in many languages! You can’t provide any hints like that to google translate, especially if you are starting with an un-gendered language like English.
I do still use google translate though. When my phone is offline, or translating very long text. LLM’s perform poorly with larger context windows.
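A minimal sketch of the context-rich prompting described above, in Python with the openai client; the model name, prompt wording, and helper name are illustrative assumptions, not anything prescribed in this thread:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def translate_with_context(text: str, language: str, context: str) -> str:
        # The context (topic, register, recipient's gender) is exactly what
        # Google Translate can't be told; an LLM can use it to pick the right
        # word senses and grammatical gender.
        prompt = (
            f"Translate the following into {language}. "
            f"Context: {context}\n\n{text}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(translate_with_context(
        "Thanks, I'll get back to you tomorrow.",
        "Polish",
        "a reply to a colleague's email about scheduling; the recipient is female",
    ))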
- estebarb 16 hours ago
I have found that prompting "translate my text to English, do not change anything else" works fine.
However, now I prefer to write directly in English and consider whatever grammar/orthographic errors I have as part of my writing style. I hate having to rewrite the LLM output to add myself back into the text.
- justin66 19 hours ago
As AIs get good enough, dealing with someone struggling with English will begin to feel like a breath of fresh air.
- carsoon 14 hours ago
I think even when this is used they should include "(translated by LLM)" for transparency. When you use an intermediate layer there is always bias.
I've written blog articles using HTML and asked LLMs to change certain HTML structure and it ALSO tried to change the wording.
If a user doesn't speak a language well, they won't know whether their meanings were altered.
- tensegrist 19 hours ago
one solution that appeals to me (and which i have myself used in online spaces where i don't speak the language) is to write in a language you can speak and let people translate it themselves however they wish
i don't think it is likely to catch on, though, outside of culturally multilingual environments
- internetter 19 hours ago> i don't think it is likely to catch on, though, outside of culturally multilingual environments
It can if the platform has built in translation with an appropriate disclosure! for instance on Twitter or Mastodon.
- AnimalMuppet 19 hours ago
Maybe they should say "AI used for translation only". And maybe us English speakers who don't care what AI "thinks" should still be tolerant of it for translations.
- jampa 17 hours ago
I wrote about this recently. You need to prompt better if you don't want AI to flatten your original tone into corporate speak:
https://jampauchoa.substack.com/p/writing-with-ai-without-th...
TL;DR: Ask for a line edit, "Line edit this Slack message / HN comment." It goes beyond fixing grammar (because it improves flow) without killing your meaning or adding AI-isms.
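For the curious, a minimal sketch of that "line edit" pattern as a script, in Python with the openai client; the model choice and exact instruction text are illustrative assumptions, not taken from the linked article:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    draft = "my HN comment, written quickly, with rough flow..."

    # A "line edit" asks for grammar and flow fixes while keeping the
    # author's tone and meaning, rather than a full rewrite.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "Line edit this HN comment. Keep my tone and meaning; "
                       "fix only grammar and flow:\n\n" + draft,
        }],
    )
    print(resp.choices[0].message.content)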
- hotsauceror 19 hours ago
I agree with this sentiment.
When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."
- giancarlostoro 17 hours ago
You can have the same problem with Googling things. LLMs usually form conclusions I align with when I do the independent research. Google isn't anywhere near as good as it was 5 years ago. All the years of crippling their search ranking system and suppressing results have caught up to them, to the point that most LLMs are Google replacements.
- ndsipa_pomu 17 hours ago
To my mind, it's like someone saying "I asked Fred down at the pub and he said...". It's someone stupidly repeating something that's likely stupid anyway.
- JeremyNT 19 hours ago
In a work context, for me at least, this class of reply can actually be pretty useful. It indicates somebody already minimally investigated a thing and may have at least some information about it, but they're hedging on certainty by letting me know "the robots say."
It's a huge asterisk to avoid stating something as a fact, but indicates something that could/should be explored further.
(This would be nonsense if they sent me an email or wrote an issue up this way or something, but in an ad-hoc conversation it makes sense to me)
I think this is different than on HN or other message boards; it's not really used by people to hedge here. If they don't actually personally believe something to be the case (or have a question to ask), why are they posting anyway? No value there.
- lanstin 17 hours ago
Yeah, if the person doing it is smart I would trust they used a reasonable prompt and ruled out flagrant BS answers. Sometimes the key thing is just to know the name of the thing for the answer. It's equally as good/annoying as reporting what a Google search gives for the answer. I guess I assume mostly people will do the AI query/search and then decide to share the answer based on how good or useful it seems.
- dogleash 16 hours ago> can actually be pretty useful. It indicates somebody already minimally investigated a thing
Every time this happens to me at work one of two things happens:
1) I know a bit about the topic, and they're proudly regurgitating an LLM about an aspect of the topic we didn't discuss last time. They think they're telling me something I don't know, while in reality they're exposing how haphazard their LLM use was.
2) I don't know about the topic, so I have to judge the usefulness of what they say based on all the times that person did scenario Number 1.
- mikkupikku 15 hours ago
These days, most people who try googling for answers end up reading an article which was generated by AI anyway. At least if you go right to the bot, you know what you're getting.
- MetaWhirledPeas 16 hours ago> When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."
I have a less cynical take. These are casual replies, and being forthright about AI usage should be encouraged in such circumstances. It's a cue for you to take it with a grain of salt. By discouraging this you are encouraging the opposite: for people to mask their AI usage and pretend they are experts or did extensive research on their own.
If you wish to dismiss replies that admit AI usage you are free to do so. But you lose that freedom when people start to hide the origins of their information out of peer pressure or shame.
- dogleash 16 hours ago
I am amused by the defeatism in your response: that expecting anyone to actually try anymore is a lost cause.
- KaiserPro 14 hours ago"lets ask the dipshit" is how my colleague phrases it
- gardenhedge 19 hours ago
I disagree. It's not a potential avenue for further investigation. IMO, AI should always be consulted.
- OptionOfT 19 hours ago
But I'm not interested in the AI's point of view. I have done that myself.
I want to hear your thoughts, based on your unique experience, not the AI's, which is an average of the experience in the data it ingested. The things that are unique will not surface because they aren't seen enough times.
Your value is not in copy-pasting. It's in your experience.
- JoshTriplett 17 hours ago
If I wanted to consult an AI, I'd consult an AI. "I consulted an AI and pasted in its answer" is worse than worthless. "I consulted an AI and carefully checked the result" might have value.
- SunshineTheCat 17 hours ago
I am just sad that I can no longer use em dashes without people immediately assuming what I wrote was AI. :(
- dinkleberg 16 hours ago
Some will blindly dismiss anything using them as AI generated, but realistically the em-dash is only one sign among many. Way more obvious is the actual style of the writing. I use Claude all of the time and I can instantly tell if a blog post I’m reading was written with Claude. It is so distinctive. People use some of the patterns it uses some of the time. But it uses all of them all of the time.
- Kim_Bruning 13 hours ago
You're absolutely right. No wonder you can recognize it so easily. Let me just sit with that.
edit 1: The sincerest form of flattery
edit 2: To be fair, Claude Opus 4.5 seems to encourage people to be nicer to each other if you let them.
- MarkusQ 17 hours ago
Go ahead, use em—let the haters stew in their own typographically-impoverished purgatory.
- whimsicalism 19 hours ago
I think there's "well done and usually unnoticeable" and there's "poorly done and insulting". I don't agree that the two are always the same, but I think lots of people might think they are doing the former but are not aware enough to realize they are doing the latter.
- amelius 17 hours ago"I asked AI and it said basically the same as you."
- Semiapies 14 hours ago
It's at least a factor in why I value HN commentary so much less than I used to.
- Balgair 18 hours ago
Aside:
When someone says: "Source?", is that kinda the same thing?
Like, I'm just going to google the thing the person is asking for, same as they can.
Should asking for sources be banned too?
Personally, I think not. HN is better, I feel, when people can challenge the assertions of others and ask for the proof, even though that proof is easy enough to find for all parties.
- Kim_Bruning 13 hours ago
I actually use LLMs to help me dig up the sources. It's quicker than google and you get them nicely formatted besides.
But: Just because it's easy doesn't mean you're allowed to be lazy. You need to check all the sources, not just the ones that happen to agree with your view. Sometimes the ones that disagree are more interesting! And at least you can have a bit of drama yelling at your screen at how dumb they obviously are. Formulating why they are dumb, now there's the challenge - and the intellectual honesty.
But yeah, using LLMs to help with actually doing the research? Totally a thing.
- officeplant 17 hours ago>Should asking for sources be banned too?
IMO, HN commenters used to at least police themselves more and provide sources in their comments when making claims. It was what used to separate HN and Reddit for me when it came to response quality.
But yes it is rude to just respond "source?" unless they are making some wild batshit claims.
- neltnerb 17 hours ago
I think what's important here is to reduce harm even if it's still a little annoying, because if you try to completely ban mentioning that something is LLM-written, you'll just have people doing it without a disclaimer...
Yes, comments of this nature are bad, annoying, and should be downvoted as they have minimal original thought, take minimal effort, and are often directly inaccurate. I'd still rather they have a disclaimer to make it easier to identify them!
Further, entire articles submitted to HN are clearly written by an LLM yet get over a hundred upvotes before people notice, whether there's a disclaimer or not. These do not get caught quickly, and someone clicking on the link will likely generate ad revenue that incentivizes people to continue doing it.
LLM comments without a disclaimer should be avoided, and submitted articles written by a LLM should be flagged ASAP to avoid abuse since by the time someone clicks the link it's too late.
- sejje 19 hours ago
This is the only reasonable take.
It's not worth polluting human-only spaces, particularly top tier ones like HN, with generated content--even when it's accurate.
Luckily I've not found a lot of that here. What I have found has usually been downvoted plenty.
Maybe we could have a new flag option, which became visible to everyone with enough "AI" votes so you could skip reading it.
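A minimal sketch of how such a flag could work, in Python; the threshold and all the names are hypothetical, not an existing HN feature:

    from dataclasses import dataclass, field

    AI_FLAG_THRESHOLD = 5  # hypothetical: distinct votes before the label shows

    @dataclass
    class Comment:
        text: str
        ai_flags: set = field(default_factory=set)  # usernames who flagged it

        def flag_ai(self, username: str) -> None:
            self.ai_flags.add(username)

        @property
        def show_ai_label(self) -> bool:
            # Once enough distinct users flag a comment as AI-generated,
            # the label becomes visible to everyone, who can then skip it.
            return len(self.ai_flags) >= AI_FLAG_THRESHOLD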
- fwip 17 hours ago
I'd love to see that for article submissions, as well.
- manmal 17 hours ago
What LLMs generate is an amalgamation of the human content they were trained on. I get that you want what actual humans think, but that’s also basically a weighted amalgamation. Real, actual insight is incredibly rare and I doubt you see much of it on HN (sorry guys; I’ll live with the downvotes).
- dinkleberg 15 hours ago
Why do you suppose we come to HN if not for actual insight? There are other sites much better for getting an endless stream of weighted amalgamations of human content.
- dogleash 15 hours ago
I'm downvoting exclusively for your comment about downvotes.
- SoftTalker 17 hours ago
Agree, and I think it might also be useful to have that be grounds for a shadowban if we start seeing this getting out of control. I'm not interested, even slightly, in what an LLM has to say about a thread on HN. If I see an account posting an obvious LLM copy/paste, I'm not interested in seeing anything from that account either. Maybe a warning on the first offense is fair, but it should not be tolerated or this site will just drown in the slop.
- ferngodfather 13 hours ago
Yeah, like if I wanted to know what a particular AI says, I'd have asked it.
- crazygringo 16 hours ago
I actually disagree, in certain cases. Just today I saw:
https://news.ycombinator.com/item?id=46204895
when it had only two comments. One of them was the Gemini summary, which had already been massively downvoted. I couldn't make heads or tails of the paper posted, and probably neither could 99% of other HNers. I was extremely happy to see a short AI summary. I was on my phone and it's not easy to paste a PDF into an LLM.
When something highly technical is posted to HN that most people don't have the background to interpret, a summary can be extremely valuable, and almost nobody is posting human-written summaries together with their links.
If I ask someone a question in the comments, yes it seems rude for someone to paste back an LLM answer. But for something dense and technical, an LLM summary of the post can be extremely helpful. Often just as helpful as the https://archive.today... links that are frequently the top comment.
- zacmps 16 hours ago
LLM summaries of papers often make overly broad claims [1].
I don't think this is a good example personally.
- crazygringo 15 hours ago
When there's nothing else to go on, it's still more useful than nothing.
The story was being upvoted and on the front page, but with no substantive comments, clearly because nobody understood what the significance of the paper was supposed to be.
I mean, HN comments are wrong all the time too. But if an LLM summary can at least start the conversation, I'm not really worried if its summary isn't 100% faithful.
- Rarebox 15 hours ago
That's a pretty good example. The summary is actually useful, yet it still annoys me.
But I'm not usually reading the comments to learn, it's just entertainment (=distraction). And similar to images or videos, I find human-created content more entertaining.
One thing to make such posts more palatable could be if the poster added some contribution of their own. In particular, they could state whether the AI summary is accurate according to their understanding.
- crazygringo 15 hours ago
I definitely read the comments to learn. I love when there's a post about something I didn't know about, and I love when HN'ers can explain details that the post left confusing.
If I'm looking for entertainment, HN is not exactly my first stop... :P
- that_guy_iain 16 hours ago
There will be many cases you won't even notice. When people know how to use AI to help with their writing, it's not noticeable.
- BrtByte 17 hours ago
HN is a mix of personal experience, weird edge cases, and even the occasional hot take. That's what makes HN valuable.
- delfinom 18 hours ago
It's kinda funny how internet culture once had "lmgtfy" links because people were asking questions instead of just searching Google.
But now people are vomiting chatgpt responses instead of linking to chatgpt.
- TheAdamist 16 hours ago
Same acronym still works, just swap gemini in place of google.
- subscribed 17 hours ago
No, linking to chatgpt is not a response. For some sorts of questions it (which model exactly is it?) might be better, for some it might be worse.
- danielmarkbruce 16 hours ago
And yet people ask for sources all the time. "I don't care what you think, show me what someone else thinks".
- deadbabe 18 hours ago
On a similar sentiment, I’m sick and tired of people telling others to go google stuff.
The point of asking on a public forum is to get socially relatable human answers.
- subscribed 17 hours ago
Yeah, but you get two extremes.
Most often I see these answers under posts like "what's the longest river on earth" or "is Bogota the capital of Venezuela?"
Like. Seriously. It often takes MORE time to post this sort of lazy question than to actually look it up: literally paste the question into $search_engine and get 10 of the same answers on the first page.
Actually, sometimes telling a person like this "just Google it" is beneficial in two ways: it helps the poster develop/train their own search skills, and it may gently nudge someone else into trying that approach first, too. At the same time it slows the rise of extremely low-effort/quality posts.
But sure, sometimes you get the other kind. Very rarely.
- jedbrooke 18 hours ago
I’ve seen so many SO and other forum posts where the first comment is someone smugly saying “just google it, silly”.
Only, I’m not the one who posted the original question. I DID google (well, DDG) it, and the results led me to someone asking the same question as me, but it only had that one useless reply.
- jquery 16 hours ago
Or worse, you google an obscure topic and the top reply is “apple mountain sleep blue chipmunk fart This comment was mass deleted with Redact” and the replies to that are all “thanks that solved my problem”
- delecti 17 hours ago
Agreed, with a caveat. If someone is asking for an objective answer which could be easily found with a search, and hasn't indicated why they haven't taken that approach, it really comes across as laziness and offloading their work onto other people. Like, "what are the best restaurants in an area" is a good question for human input; "how do you deserialize a JSON payload" should include some explanation for what they've tried, including searches.
- delaminator 16 hours ago
While I don't disagree with the general sentiment, a black-and-white ban leaves no room for nuance.
I think it's a very valid question to ask the AI "which coding language is most suitable for you to use and why", or other similar questions.
- stephen_g 13 hours ago
But if I wanted to ask an AI I would put that into ChatGPT, not ask HN. I would only ask that on HN if I wanted other people's opinions!
You could reply with "Hey you could ask [particular LLM] because it had some good points when I asked it" but I don't care to see LLM output regurgitated on HN ever.
- zby 17 hours ago
I strongly disagree - when I post something that AI wrote I am doing it because it explains my thoughts better than I can - it digs deeper and finds the support for intuitions that I cannot explain nicely. I quote the AI - because I feel this is fair - if you ban this you would just lose the information that it was generated.
- pc86 14 hours ago
If an LLM writes better than you do, you need to take a long look in the mirror and figure out what you can do to fix that, because it's not a good thing.
- SunshineTheCat 17 hours ago
This is like saying "I use a motorized scooter at walmart, not because I can't walk, but because it 'walks' better than I can."
- officeplant 17 hours ago> if you ban this you would just lose the information that it was generated.
The argument is that the information it generated is just noise, and not valuable to the conversation thread at all.
- i80and 17 hours ago
This is... I'll go with "dystopian". If you're not sure you can properly explain an idea, you should think about it more deeply.
- simianparrot 17 hours ago
You have to be joking
- dhosek 17 hours ago
Meh. Might as well encourage people to post links to search results then too.
- zby 16 hours ago
I like it when someone links to where they found the information.
- nosklo 10 minutes ago
I asked Gemini, and Gemini said:
The Case for a New HN Guideline on AI-Generated Content
This is a timely discussion. While AI is an invaluable tool, the issue isn't using AI—it's using it to replace genuine engagement, leading to "low-signal" contributions.
The Problem with Unfiltered AI Replies
Instead of an outright ban, which punishes useful use cases, a new guideline should focus on human value-add and presentation.
Dilution of Human Insight: HN's core value is the unique, experienced human perspective. Unanalyzed LLM-dumps replace original thought with aggregated, generic consensus.
Reading Fatigue & Bloat: Long, copy-pasted blocks of LLM text break the flow of discussion and make comments less scannable, forcing users to sift through machine-generated prose to find human analysis.
Lack of Authority/Verification: A comment that just says "$AI said X" is essentially an anonymous opinion. It lacks the critical filter, context, and experience of the human poster, making it less trustworthy, especially given LLM hallucination risk.
The Value of AI as a Tool
Quick Context/Summary: LLMs can quickly provide neutral, accurate definitions, historical context, or a list of arguments, saving users a separate search.
Supporting Evidence: When used properly, AI output can be supporting "data" for a human's core argument or analysis.
A Proposed Middle Ground Guideline
The spirit of the guideline should be: If you use an LLM, your contribution must be more than the LLM's output.
Mandatory Analysis: The commenter must add their own critical analysis, personal experience, or counter-argument that contextualizes, critiques, or supports the AI's summary.
Clear Attribution and Formatting: All LLM-generated text must be clearly attributed (e.g., "I asked ChatGPT-4...") and visually separated (e.g., using a > blockquote) to maintain scannability.
Curation over Dumping: Encourage summarizing or excerpting the most relevant parts of the AI output, rather than pasting a lengthy, unedited wall of text.
Ultimately, the community downvotes already function to filter low-effort posts, but a clear guideline would efficiently communicate the shared norm: AI is a tool for the human conversation, not a replacement for it.
- AdamH12113 19 hours ago
To me, the valuable comments are the ones that share the writer's expertise and experiences (as opposed to opinions and hypothesizing) or the ones that ask interesting questions. LLMs have no experience and no real expertise, and nobody seems to be posting "I asked an LLM for questions and it said...". Thus, LLM-written comments (whether of the form "I asked ChatGPT..." or not) have no value to me.
I'm not sure a full ban is possible, but LLM-written comments should at least be strongly discouraged.
- m-hodges 18 hours ago
I feel like this won't eliminate AI-generated replies, it'll just eliminate disclosing that the replies are AI-generated.
- danielbln 16 hours ago
Personally, I don't care about AI generated replies, but if it's a wall of text of unedited first-try output, then it's just slop, and that's just rude. If I don't notice it's AI because it's succinct and without GPTisms, well, I don't love it either, but neither will I recognize it nor will it therefore bother me.
- venturecruelty 11 hours agoGood. The more we can shame this behavior into the shadows, the better. Eventually, people will stop altogether.
- neom 17 hours agoNot addressing your question directly, but when I got flagged last year I emailed Dan and this was the exchange: " John Edgar <je@h4x.club> Sat, Jul 15, 2023, 8:08 AM to Hacker
https://news.ycombinator.com/item?id=36735275
Just curious if chatGPT is actually formally banned on HN?
Hacker News <hn@ycombinator.com> Sat, Jul 15, 2023, 4:12 PM to me
Yes, they're banned. I don't know about "formally" because that word can mean different things and a lot of the practice of HN is informal. But we've definitely never allowed bots or generated comments. Here are some old posts referring to that.
dang
https://news.ycombinator.com/item?id=35984470 (May 2023) https://news.ycombinator.com/item?id=35869698 (May 2023) https://news.ycombinator.com/item?id=35210503 (March 2023) https://news.ycombinator.com/item?id=35206303 (March 2023) https://news.ycombinator.com/item?id=33950747 (Dec 2022) https://news.ycombinator.com/item?id=33911426 (Dec 2022) https://news.ycombinator.com/item?id=32571890 (Aug 2022) https://news.ycombinator.com/item?id=27558392 (June 2021) https://news.ycombinator.com/item?id=26693590 (April 2021) https://news.ycombinator.com/item?id=22744611 (April 2020) https://news.ycombinator.com/item?id=22427782 (Feb 2020) https://news.ycombinator.com/item?id=21774797 (Dec 2019) https://news.ycombinator.com/item?id=19325914 (March 2019)"
(Edit: oh, it's not 2024 anymore. How time flies!)
- michaelcampbell 19 hours agoRelated: Comments saying "this feels like AI". It's this generation's "Looks shopped" and of zero value, IMO.
- whimsicalism 19 hours agoDisagree, find these comments valuable - especially if they are about an article that I was about to read. It's not the same as sockpuppeting accusations, which I think are right to be banned.
- sfink 16 hours agoYeah, I haven't used AIs enough to be that good at immediately spotting generated output, so I appreciate the chance to reconsider my assumption that something was human-written. I'm sure people who did NOT use an AI find it insulting to be so accused, but I'd rather normalize those accusations and shift the norm to see them as suspicions rather than accusations.
I do find it more helpful when people specify why they think something was AI-generated. Especially since people are often wrong (fwict).
- duskwuff 17 hours agoYes. Especially on articles - the baseline assumption is that most articles are written by humans, and it's nice to know when that expectation may have been violated.
- sbrother 19 hours agoFair, but then that functionality should be built into the flagging system. Obvious AI comments (worse, ones that are commercially driven) are a cancer that's breaking online discussion forums.
- criddell 17 hours agoI think Slashdot still has the best moderating system. Being able to flag a comment as insightful, funny, offtopic, redundant, etc... adds a lot of information and gives more control to readers over the types, quantity, and quality of discussion they see.
For example, some people seem to be irritated by jokes and being able to ignore +5 funny comments might be something they want.
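A minimal sketch of how that could work, with hypothetical tag names and a reader-side mute list (illustrative only, not Slashdot's actual implementation):

    // Hypothetical tag vocabulary, borrowed from Slashdot's labels.
    type Tag = "insightful" | "funny" | "offtopic" | "redundant";

    interface ModeratedComment {
      id: number;
      text: string;
      scores: Partial<Record<Tag, number>>; // net votes per tag
    }

    // Hide a comment if its highest-scoring tag is one the reader has muted.
    function visible(c: ModeratedComment, muted: Set<Tag>): boolean {
      let top: Tag | null = null;
      let best = 0;
      for (const [tag, score] of Object.entries(c.scores) as [Tag, number][]) {
        if (score > best) { best = score; top = tag; }
      }
      return top === null || !muted.has(top);
    }

    // A reader irritated by jokes mutes "funny" and never sees +5 Funny posts:
    const muted = new Set<Tag>(["funny"]);
    const thread: ModeratedComment[] = [
      { id: 1, text: "detailed war story", scores: { insightful: 5 } },
      { id: 2, text: "a pun", scores: { funny: 5 } },
    ];
    console.log(thread.filter(c => visible(c, muted)).map(c => c.id)); // [1]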
- yodon 19 hours ago> Comments saying "this feels like AI" should be banned.
Strong agree.
If you can make an actually reliable AI detector, stop wasting time posting comments on forums and just monetize it to make yourself rich.
If you can't, accept that you can't, and stop wasting everyone else's time with your unvalidated guesses about whether something is AI or not.
The least valuable, lowest-signal comments are "this feels like AI." Worse, they never raise the quality of the discussion about the article.
It's "does anyone else hate those scroll bars" and "this site shouldn't require JavaScript" for a new generation.
- D13Fd 18 hours agoI strongly disagree. I like the social pressure against people posting comments that feel like AI (e.g., that add a lot of text and little non-BS substance). I also like the reminder to view suspicious comments and media through that lens.
- notahacker 16 hours ago"Does anyone else hate those s̶c̶r̶o̶l̶l̶b̶a̶r̶s̶ ads/modals/unconventional page layout" is the archetypical HN response tbf, and often the most upvoted
Also, I'm pretty sure most people can spot blogspam full of glaringly obvious cliche AI patterns without being able to create a high reliability AI detector. To set that as the threshold for commentary on whether an article might have been generated is akin to arguing that people shouldn't question the accuracy of a claim unless they've built an oracle or cracked lie detection.
- ruuda 19 hours agoI find them helpful. It happens semi-regularly now that I read something that was upvoted, but after a few sentences I think "hmm, something feels off", and after the first two paragraphs I suspect it's AI slop. Then I go to the comments, and it turns out others noticed too. Sometimes I worry that I'm becoming too paranoid in a world where human-written content feels increasingly rare, and it's good to know it's not me going crazy.
In one recent case (the slop article about adenosine signalling) a commenter had a link to the original paper that the slop was engagement-farming about. I found that comment very helpful.
- 8organicbits 18 hours agoOne of my recent blog posts got a comment like that, and I tried to reframe it as "this is poorly written", and took the opportunity to solicit constructive criticism and to reflect on my style. I think my latest post improved, and I'm glad I adjusted my style.
- Marsymars 17 hours agoNot that you shouldn't self-reflect, but some people's style is going to be similar to the default GPT voice incidentally, and unfortunately for them.
GPT has ruined my enjoyment of using em dashes, for instance.
- dpifke 17 hours agoI recently logged onto LinkedIn for the first time in a while, and found an old job posting from when I was hiring at a startup ~2 decades ago. It's amazing how much it sounds like LLM output—I would have absolutely flagged it as AI-generated if I saw it today.
- whimsicalism 18 hours agoi think some people get excited by the notion of identifying AI content so start doing so without knowing how. truly nothing about your post reads like an LLM generation, it has a very non-LLM 'voice'
- djeastm 18 hours agoI disagree. Traditional netiquette when downvoting something is to explain why.
- Analemma_ 17 hours agoStrong disagree: these comments (if they lay out their case persuasively) allow me to skip the content completely, and save me a lot of time. They provide lots of value, and in fact there should be social rewards for the work of wading through value-free slop to save others from having to do so.
- TRiG_Ireland 17 hours agoAs Tom Scott has said, people telling you what AI told them is worse than people describing their dreams. It definitely does not usefully contribute to the conversation.
Small exception if the user is actually talking about AI, and quoting some AI output to illustrate their point, in which case the AI output should be a very small section of the post as a whole.
- ManlyBread 19 hours agoI think that the whole point of a discussion forum is to talk to other people, so I am in favor of banning AI replies. There's zero value in these posts because anyone can type chatgpt.com in the browser and ask whatever question they want at any time, while getting input from another human being is not always guaranteed.
- Kim_Bruning 11 hours agoPossibly a distinction needs to be made between raw llm output, raw google output (like lmgtfy), or any other tool's raw output on the one hand, and a synthesis of your conclusions after having used these tools together, on the other.
Obviously cut&pasting the raw output of a google search or pubmed search would be silly. Same goes for AI-generated summaries and such. But references you find this way can certainly be useful.
And using spelling checkers, grammar checkers, style checkers, translation tools, etc. (old-fashioned or new AI-enhanced) should be ok if used wisely.
- Aachen 18 hours agoYou're like the 9th out of the 10 top-level replies I've read so far that says this, with the 10th one saying it in a different way (without suggesting they could have asked it themselves). What I find interesting is that everyone agrees and nobody argues about prompt engineering, as in, nobody says it's helpful that a skilled querier shares responses from the system. Apparently there's now the sentiment that literally anybody else could have done the same without thought
Whether prompt engineering is a skill is perhaps a different topic. I just found this meta statistic in this thread interesting to observe
- alwa 17 hours agoI do think it would be useful to normalize pasting a link to the full transcript if you’re going to quote an LLM. Both because I do find it useful to examine others’ prompting techniques, and because that gives me the context to gauge the response’s credibility.
What did we use to call it? Google-fu?
- ManlyBread 12 hours agoThis is probably the first time I've seen the term "prompt engineer" mentioned this year. I thought that this joke had run its course back in 2023 and was largely forgotten nowadays.
- alwa 4 hours agoA silly name, but I’ve definitely watched peers coax sensible results out of braggadocious LLMs… and also watched friends say “make me an app that enters the TPS report data for me” (or “make fully playable Grand Theft Auto, but on Mars”) and be surprised that the result is trash.
- incanus77 19 hours agoYes. This is the modern equivalent of “I searched the web and this is what it said”. If I could do the same thing and have the same results, you’re not adding any value.
Though this scenario is unlikely to have actually happened, I'd equate this with someone asking me what I thought about something, and me walking them over to a book on the shelf to show them what that author thought. It's just an aggregated and watered-down average of all the books.
I’d rather hear it filtered through a brain, be it a good answer or bad.
- chemotaxis 19 hours agoThis wouldn't ban the behavior, just the disclosure of it.
- AlwaysRock 19 hours agoI guess... That is the point in my opinion.
If you just say "here is what the LLM said" and it turns out to be nonsense, you can say something like "I was just passing along the LLM response, not my own opinion".
But if you take the llm response and present it as your own, at least there is slightly more ownership over the opinion.
This is kind of splitting hairs but hopefully it makes people actually read the response themselves before posting it.
- Kim_Bruning 12 hours agoTaking ownership isn't the worst instinct, to be fair. But that's a slightly different formulation.
"People are responsible for the comments that they post no matter how they wrote them. If you use tools (AI or otherwise) to help you make a comment, that responsibility does not go away"
- xivzgrev 19 hours agoAgreed - in fact these folks are going out of their way to be transparent about it. It's much easier to just take credit for a "smart" answer
- muwtyhg 18 hours agoSo those folks must be doing it because they think it's helpful, right? They are explicitly trying not to take credit for the words. Do you think, after a ban on these kinds of posts is implemented, that those posters would start hiding their use of AI to create replies, or would they just stop using AI to reply at all?
- sfink 16 hours agoThat was my immediate thought too, but I'm still in favor of banning it in order to make it a community norm. Right now, people generally seem to think that such comments are adding some sort of signal, and I don't think they're stupid to think that. Not stupid, just wrong. And people feel personally attacked and so get defensive and harden their position, so it would be better to just make it against the guidelines with some justification there rather than trying to control it with individual arguments (with a defensive person!) or downvoting alone. (And the guidelines would be the place to put the explanation of why it's disallowed.)
People will still do it, but now they're doing it intentionally in a context where they know it's against the guidelines, which is a whole different situation. Staying up late to argue the point (and thus add noise) is obviously not going to work.
I'd prefer the guideline to allow machine translation, though, even when done with a chatbot. If you are using a chatbot intentionally with the purpose of translating your thoughts, that's a very different comment than spewing out the output from a prompt about the topic. There's some gray area where they fuzz together, but in my experience they're still very different. (Even though the translated ones set off all the alarm bells in terms of style, formatting, and phrasing.)
- lproven 19 hours agoI endorse this. Please do take whatever measures are possible to discourage it, even if it won't stop people. It at least sends a message: this is not wanted, this is not helpful, this is not constructive.
- kreck 16 hours agoYes.
Saying “ChatGPT told me …” is a fast track to getting your input dismissed on our team. That phrasing shifts accountability from you to the AI. If we really wanted advice straight from the model, we wouldn’t need a human in the loop - we’d ask it ourselves.
- abustamam 15 hours agoWe use AI heavily in our product and development flow. Sometimes we'd encounter a problem none of us can figure out at the moment so some of us would use ChatGPT to brainstorm some solutions. We'd present the solutions, poke holes into them, and go forward from there. Sometimes we don't use the actual ideas from GPT but ideas that were inspired by the generated ideas.
The intent isn't to shift accountability, it's to brainstorm. A shitty idea gets shot down quickly, whereas a good idea gets implemented.
Edit: sentence
- TulliusCicero 18 hours agoI'd like it to be forbidden, yes.
Sure, I'll occasionally ask an LLM about something if the info is easy to verify after, but I wouldn't like comments here that were just copy-pastes of the Google search results page either.
- snayan 17 hours agoI would say it depends, from your examples:
1) borderline. Potentially provides some benefit to the thread for readers who also don't have time or expertise to read an 83 page paper. Although it would require someone to acknowledge and agree that the summary is sound.
2) Acceptable. Dude got grok to make some cool visuals that otherwise wouldn't exist. I don't see what the issue is with something like this.
3) borderline. Same as 1 mostly.
The more I think about this, the less bothered I am by it. If the problem were someone jumping into a conversation they know nothing about, and giving an opinion that is actually just the output of an LLM, I'd agree. But all the examples you provided are transformative in some way. Either summarizing and simplifying a long article or paper, or creating art.
- TheAceOfHearts 13 hours agoI think you shouldn't launder LLM output as your own, but in AI model discussion and new release threads it can be useful to highlight examples of outputs from LLMs. The framing and usage is a key element: I'm interested in what kinds of things people are trying. Using LLM output as a substitute for engagement isn't interesting, but combining a bunch of responses to highlight differences between models could be interesting.
I think sometimes it's fine to source additional information from an LLM if it helps advance the discussion. For example, if I'm confused about some topic, I might explore various AI responses and look at the source links they provide. If any of the links seem compelling I'll note how I found the link through an LLM and explain how it relates to the discussion.
- amatecha 17 hours agoYes. If I wanted an LLM-generated response I'd submit my own query to such a service. I never want to see LLM-generated content on HN.
- ThrowawayR2 14 hours agoThat's been discussed previously in https://news.ycombinator.com/item?id=33945628 and dang said in the topmost comment: "They're already banned—HN has never allowed bots or generated comments. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there. We don't want canned responses from humans either! ...". There's more to his comment if you're interested.
The HN guidelines haven't yet been updated but perhaps if enough people send an email to the moderators, they'll do it.
- cwmoore 15 hours agoYes. Embarrassing cringe, whether or not it is noted.
But this is a text-only forum, and text (to a degree, all digital content) has become compromised. Intent and message are not attributable to real-life experience or effort. For the moment I have accepted the additional overhead.
As with most, I have a habit of estimating the validity of expertise in comments, and experiential biases, but that is becoming untenable.
Perhaps there will soon be transformer features that produce prompts adequate to the task of reproducing the thought behind each thread, so their actual value, informational complexity, humor, and salience may be compared?
Though many obviously human commenters are actually inferior to answers from “let me chatgpt that for you.”
I have had healthy suspicions for a while now.
- krick 12 hours agoSure, everyone wants to "stop silly people replying to my comments by posting LLM-generated garbage", but rules are rules, so you should understand that by introducing a rule like the one you propose, you also automatically forbid discussions about "here's a weird trick to make an LLM make stupid mistakes" or "biases of different LLMs", where people share which prompts they tried and what the result was. Obviously, that's not what you meant (right?), and everyone understands that, so then it's a judgement call when this applies and when it doesn't, and, congratulations, you've made another stupid rule that no one follows "and that's ok".
"A guideline to refrain" seems better. Basically, this should be only slightly more tolerated than "let me google that for you" replies: maybe not actively harmful, but rude. But, anyway, let's not be overly pretentious: who even reads all these guidelines (or rules, for that matter)? Also, it is quite apparent that the audience of HN is on average much less technical and "nerdy" than it was, say, 10 years ago, so, I guess, expect these answers to continue for quite some time and just deal with it.
- kldg 8 hours agoThis website has guidelines? I have a friend who vomits out an AI's response to current events; drives me mad, but I resist being a dick about it (to their face).
I don't recall any instances where I've run into the problem here, maybe because I tend to arrive to threads as a result of them being popular (listed on Google News) which means I'm only going to read the top 10-50 posts. I read human responses for a bit before deciding if I should continue reading, and that's the same system I use for LLMs because sometimes I can't tell just by the formatting; if it's good, it's good - if it's bad, it's bad -- I don't care if a chicken with syphilis wrote it.
- gruez 19 hours agoWhat do you think about other low quality sources? For instance, "I checked on infowars.com, and this is what came up"? Should they be banned as well?
- everdrive 19 hours agoIt depends on if you're saying "Infowars has the answer, check out this article" vs "I know this isn't a reputable source, however it's a popular source and there's an interesting debate to be had about Infowars' perspective, even if we can agree it's incorrect."
- gruez 18 hours ago>I know this isn't a reputable source, however it's a popular source and there's an interesting debate to be had about Infowars' perspective, even if we can agree it's incorrect."
You can make the same argument for AI output as well, but to be clear, I'm referring to the case of someone bringing up a low quality source as the answer.
- everdrive 18 hours agoDefinitely agreed, I think the exact same would apply -- if there's an insightful conversation to be had about LLMs or their responses, then I think we'd all welcome it. If it's just someone saying "I asked the LLM and it said X" then we're better off without it.
Not sure how easy that would actually be to moderate, of course.
- newsoftheday 19 hours agoYour point conflates a potential low-quality source with AI output, while also making the judgement that <fill in the blank site> is a low-quality source to be disregarded 100% of the time, ignoring that an informative POV may be present even on a potentially low-quality site.
- sebastiennight 17 hours agoHave you seen this happen in the wild, ever?
I have not encountered a single instance of this since I started using HN (and can't find one using the site search either), whereas the "I asked ChatGPT" zombie answers are rampant.
- Aachen 17 hours agoIf you plagiarise text from a source that is objectively (measurably, systematically) unreliable, without vetting, adding commentary, or doing anything else to add value, then 100% yes that's the same issue
- appreciatorBus 6 hours agoYes.
Copying and pasting from chatGPT is no more contributing to discussion than it would be if you pasted the question into Google and submitted the result.
Everyone here knows how to look up an answer in Google. Everyone here knows how to look up an answer in ChatGPT.
If anyone wanted a Google result or a ChatGPT result, they would have just done that.
- a_wild_dandan 19 hours agoNo. I like being able to ignore them. I can’t do that if people chop off their disclaimers to avoid comment removal.
- Looveh 5 hours agoI wrote a short piece on the topic a while back, if anybody's interested.
- sans_souse 19 hours agoThere be a thing called Thee Undocumented Rules of HN, aka etiquette, which states - and I quote: "Thou shall not post AI generated replies"
I can't locate them, but I'm sure they exist...
- tastyfreeze 19 hours agoI've seen that document. It also has a rule that states "Thou shall not be a bot."
Unfortunately, I can't find them. It's a shame. Everybody should read them.
- warkdarrior 18 hours agoIt's a great doc, I've been training my HN bot on it.
- tekacs 18 hours agoBased on other HN rules thus far, I tend to think that this just results in more comments pointing out that you're violating a rule.
In many threads, those comments can be just as annoying and distracting as the ones being replied to.
I say this as someone who to my recollection has never had anyone reply with a rule correction to me -- but I've seen so many of them over the years and I feel like we would fill up the screen even more with a rule like this.
- zoomablemind 19 hours agoThere's hardly a standard for a 'quality' contribution to discussion. Many styles, many opinions, many ways to react and support one's statements.
If anything, it has been quite customary to supply references for important facts, thus letting readers explore further and interpret the facts themselves.
With AI in the mix, references become even more important, in view of hallucinations and fact poisoning.
Otherwise, it's a forum. Voting, flagging, ignoring are the usual tools.
- wswope 19 hours ago> There's hardly a standard for a 'quality' contribution to discussion. Many styles, many opinions, many ways to react and support one's statements.
My brain-based LLM would like you to know there’s a set of standard guidelines for contribution linked on the footer of this page.
- clearleaf 13 hours agoI don't see the point of publishing any AI-generated content. If I want an AI's opinion on something I can ask it. If I want an AI image I can generate it. I've never found it helpful to have someone else's AI output lying around.
- AlwaysRock 19 hours agoYes. Unless something useful is actually added by the commenter, or the post is about "I asked llm x and it said y (that was unexpected)".
I have a coworker who does this somewhat often and... I always just feel like saying well that is great but what do you think? What is your opinion?
At the very least the copy paster should read what the llm says, interpret it, fact check it, then write their own response.
- dylan604 19 hours ago> At the very least the copy paster should read what the llm says, interpret it, fact check it, then write their own response.
then write their own response using an AI to improve the quality of the response? the implication here is that an AI user is going to do some research, when using the AI was their research. to do the "fact check" as you suggest would mean doing actual work, and clearly that's not something the user is up for, as indicated by their use of the AI.
so, to me, your suggestion is fantasy-level thinking
- Arainach 19 hours agoI keep this link handy to send to such coworkers/people:
https://distantprovince.by/posts/its-rude-to-show-ai-output-...
- exasperaited 19 hours agoI have a client who does this — pastes it into text messages! as if it will help me solve the problem they are asking me to solve — and I'm like "that's great I won't be reading it". You have to push back.
- BiraIgnacio 13 hours agoHN (and the community here) has a great system for surfacing the most useful information and, therefore, pushing the not-so-good stuff away.
So no, I don't think forbidding anything helps. Otherwise, let things fall where they should.
- unsignedint 15 hours agoI think the real litmus test should be whether the comment adds anything substantive to the conversation. If someone is outsourcing their ideas to AI, that’s a different situation from simply using AI to rephrase or tidy up their own thoughts—so long as they fully understand what they’re posting and stand behind it.
Saying "I asked AI" usually falls into the former category, unless the discussion is specifically about analyzing AI-generated responses.
People already post plenty of non-substantive comments regardless of whether AI is involved, so the focus should be on whether the remark contributes any meaningful value to the discourse, not on the tools used to prepare it.
- MetaWhirledPeas 16 hours ago> Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?
This should be restated: Should people stop admitting to AI usage out of shame, and start pretending to be actual experts or doing research on their own when they really aren't?
Be careful what you wish for.
- yomismoaqui 19 hours agoI think disclosing the use of AI is better than hiding it. The alternative is people using it but not telling for fear of a ban.
- qustrolabe 18 hours agoHN has a very primitive comment layout that gives too big of a focus to large responses and to the first, most-upvoted post with all its replies. I think just because of that, it's better to do something about large responses with little value. I'd rather they just share a conversation link.
- shishy 19 hours agoPeople are probably copy pasting already without that disclosure :(
- coffeecat 19 hours agoI'm sure there are people who spend their time doing this, but I don't understand the motive. Doesn't one post in comment threads because one wishes to share their thoughts with other humans?
- D13Fd 18 hours agoThat's one reason why I post comments (and none of my comments are AI-generated). But I think some people cut-and-paste AI responses because they like winning upvotes and running up an upvote counter.
- rsynnott 19 hours agoThey should be forbidden _everywhere_. Absolutely obnoxious.
- RiverCrochet 18 hours agoYes. LLM copy/paste strongly indicates karma/vote farming, because if I wanted an LLM's output I could just go there myself.
Someone below mentions using it for translation and I think that's OK.
Idea: Prevent LLM copy/pasting by preempting it. Google and other things display LLM summaries of what you search for after you enter your search query, and that's frequently annoying.
So imagine the same on an HN post. In a clearly delineated and collapsible box underneath or beside the post. It is also annoying, but it also removes the incentive to run the question through an LLM and post the output, because it was already done.
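A minimal sketch of such a box, assuming a hypothetical server-side fetchSummary() helper (the function name and CSS class are made up for illustration):

    // Stand-in for whatever summarizer the site would run; purely hypothetical.
    async function fetchSummary(postId: number): Promise<string> {
      return `Pre-generated summary of post ${postId}.`;
    }

    // Escape the model output before embedding it in the page.
    function escapeHtml(s: string): string {
      return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
    }

    // Render the summary in a clearly delineated, collapsed-by-default box,
    // removing the incentive to paste the same output as a comment.
    async function summaryBoxHtml(postId: number): Promise<string> {
      const summary = await fetchSummary(postId);
      return `<details class="ai-summary">
    <summary>Machine-generated summary (collapsed)</summary>
    <blockquote>${escapeHtml(summary)}</blockquote>
    </details>`;
    }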
- pseudocomposer 10 hours agoI think it has its place, say, summarizing a large legal document under discussion. That said, if part of what someone says involves citing AI, I’d rather they acknowledge AI as their source.
I think making it a “rule” just encourages people to use AI and not acknowledge its use.
- PeterStuer 19 hours agoFor better or worse, that ship has sailed. LLMs are now as omnipresent as web search.
Some people will know how to use it in good taste, others will try to abuse it in bad taste.
It might not be universally agreed which is which in every case.
- collinmcnulty 18 hours agoI think the ship very much has not sailed on how different spaces treat LLM responses. Are LLMs something that you can use if you want, but whose raw output is considered rude to post blatantly without human ownership? “You can’t use an LLM” would be an impossible rule, but “You can use an LLM to write your response but you have to take responsibility for the output” is feasible.
- gAI 19 hours agoI asked AI, and it said yes.
- HPsquared 17 hours agoIt's nice that they warn others, though. Better to let them label it as such rather than banning the label. I'd rather it be simply frowned upon.
- suckler 15 hours agoAsking a chat bot a question and adding its answer to a public conversation defeats the purpose of the conversation. It's like telling someone to Google their question when your personal answer could have potentially been a lot more helpful than a Google search. If I wanted Grok I'd ask Grok, not the human I chose to speak to instead.
- tveyben 14 hours agoIf one wants an opinion from AI, one must ask AI - if on the other hand one wants an opinion from a human being (those with a real brain thinking real thoughts etc.) then - hopefully - that’s what you get (and will keep getting) when visiting HN.
Please don’t pollute responses with made-up machine generated time-wasting bits here…!!!
- lkt 15 hours agoNo, because it allows you to set the bozo bit on them and completely disregard anything they say in the future.
- LeoPanthera 19 hours agoBanning the disclosure of it is still an improvement. It forces the poster to take responsibility for what they have written, as now it is in their name.
- JohnFen 19 hours agoI find such replies to be worthless wastes of space on par with "let me google that for you" replies. If I want to know what genAI has to say about something, I can just ask it myself. I'm more interested in what the commenter has to say.
But I don't know that we need any sort of official ban against them. This community is pretty good about downvoting unhelpful comments, and there is a whole spectrum of unhelpful comments that have nothing to do with genAI. It seems impractical to overtly list them all.
- Scene_Cast2 19 hours agoThere is friction to asking AI yourself. And a comment typically means that "I found the AI answer insightful enough to share".
- ben_w 19 hours agoUnfortunately it's easier to train an AI to be convincing than to be correct, so it can look insightful before it's true.
Like horoscopes, only they're not actually that bad: roll a D20, and on a set of numbers known only to the DM (varying with domain and task length) you get a textbook answer; on the rest you get convincing nonsense.
- D13Fd 18 hours ago> Unfortunately it's easier to train an AI to be convincing than to be correct, so it can look insightful before it's true.
This nails it. This is the fundamental problem with using AI material. You are outsourcing thinking in a way where the response is likely to look very correct without any actual logic or connection to truth.
- codechicago277 19 hours agoThe problem is that the AI answer could just be wrong, and there’s another step required to validate what it spit out. Sharing the conversation without fact checking it just adds noise.
- ManlyBread 19 hours ago"Friciton" in this case is just plain old laziness and I don't think that it should be encouraged.
- WesolyKubeczek 19 hours agoThen state your understanding of what it said in your own words, maybe you’ll realize it’s bunk mid-sentence.
- darthwalsh 8 hours agoI'd rather you attribute your facts to an LLM vs. rephrase a hallucination that sounds right.
- AnthonyMouse 15 hours agoWe should probably distinguish between posting AI responses in a discussion of AI vs. posting them in a discussion of something else.
If the discussion itself is about AI then what it produces is obviously relevant. If it's about something else, nobody needs you to copy and paste for them.
- wkat4242 7 hours agoForbidden, no. But discouraged unless it really adds something constructive to the discussion.
- BrtByte 17 hours agoMaybe a good middle ground would be: if you're referencing something an LLM said, make it part of your thinking...
- WhyOhWhyQ 13 hours agoI always state when I use AI because I view it to be deceptive otherwise. Since sometimes I'll be using AI when it seems appropriate, and certainly only in direct limited ways, this rule seems like it would force me to be dishonest.
For instance, what's wrong with the following: "Here's an interesting point about foo topic. Here's another interesting point about bar topic; I learned of this through use of Gemini. Here's another interesting point about baz topic."
Is this banned also? I'm only sharing it because I feel that I've vetted whatever I learned and find it worth sharing regardless of the source.
- AnonC 19 hours agoAre you a new HN mod (with authority over the guidelines) and are asking for opinions from readers (that’d be new)? Or are you just another normal user and are loudly wondering about this so that mods get inputs (as opposed to writing a nice email to hn@ycombinator.com)?
I think just downvoting by committed users is enough. What matters is the content and how valuable it seems to readers. There is no need for any gatekeeping by the guidelines on this matter. That's my opinion.
- hermannj314 16 hours agoHN does not and has not ever valued human input; it has always valued substantive, clever, or interesting thought.
I am a human and more than half of what I write here is rejected.
I say bring on the AI. We are full of gatekeeping assholes, but we definitely have never cared if you have a heart (literally and figuratively).
- stego-tech 19 hours agoFormalizing it within the community rules removes ambiguity around intent or use, so yes, I do believe we should be barring AI-generated comments and stories from HN in general. At the very least, it adds another barometer of sorts to help community leaders do the hard work of managing this environment.
If you didn’t think it, and you didn’t write it, it doesn’t belong here.
- mindcandy 19 hours agoIs the content of the comment productive to the conversation? Upvote it.
Is the content of the comment counter-productive? Downvote it.
I could see cases where large walls of text that are generally useless should be downvoted or even removed. AI or not. But, the first example
> faced with 74 pages of text outside my domain expertise, I asked Gemini for a summary. Assuming you've read the original, does this summary track well?
to be frank, is a service to all HN readers. Yes it is possible that a few of us would benefit from sitting down with a nice cup of coffee, putting on some ambient music and taking in 74 pages of... whatever this is. But, faced with far more interesting and useful content than I could possibly consume all day every day, having a summary to inform my time investment is of great value to me. Even If It Is Imperfect
- novok 17 hours agoIMO you shouldn't post a large amount of quoted text; that is just annoying. You should link out at that point. I think if we ban people from citing sources, they will just stop citing sources, and that is even worse. It's the new "I googled that for you" and that is fine IMO.
- swiftcoder 17 hours ago"I googled that for you" is generally deployed when you know better than the other person (and likely wish to rub it in their face that they should know better too).
I feel like the LLM equivalent here sort of demonstrates the exact opposite (I don't know enough about this topic to even doubt the accuracy of the machine...)
- novok 15 hours agoI often get to a point where it's faster to tell a person to just look at the output of the ai agent than to write it out myself.
"I googled that for you" can also be done from a position of ignorance too.
This is just a new thing that new cultural norms are developing from.
- newsoftheday 19 hours agoIf someone is going to post like that, I feel they should post their prompt verbatim, the exact AI and version used, and the date they issued the prompt to receive the response they're posting.
There are far too many replies in this thread saying to drop the ban hammer for this to be taken seriously as Hacker News. What has happened to this audience?
- HackeNewsFan234 14 hours agoI like the honesty aspect of it so that I can choose to (possibly) ignore the response. If they were forbidden and people posted the same $AI response without the disclaimer, I'd be more easily deceived.
- DinakarS 11 hours agoIt is fun to use AI; however, people can write on their own without having to copy-paste LLM content.
2026 is a great year to watch out for typos. Typos are real humans.
- 827a 17 hours agoThis is a way of attributing where the comment is coming from, which is better than responding with what the AI says and not attributing it. I would support a guideline that discourages posting the output from AI systems, but ultimately there's no way to stop it.
- whitehexagon 16 hours ago>Personally, I'm on HN for the human conversation
Agreed. It's hard enough dealing with the endless stream of LLM marketing stories; please let's at least try to keep the comments a little free of this 'I asked...' marketing spam.
- Tiberium 19 hours agoI'm honestly grateful to those who disclose their use of AI in replies, because lately I've noticed more and more clearly LLM-generated comments on HN with no disclaimers whatsoever. And the worst part is that most people don't notice and still engage with them.
- ynx0 14 hours agoYou can’t stop people from using AI, but at least people are being transparent.
Doing this will lead to people using AI without mentioning it, making it even harder to tell which content is human-origin.
- jopsen 13 hours agoThe "I asked $LLM about $X, and here is what $LLM said" pattern is probably most used to:
(A) Ridicule the AI for giving a dumb answer.
(B) Point out how obvious something is.
- Projectiboga 16 hours agoWith the exception of careful language translation, I would say yes. Otherwise follow the breadcrumbs and click through to the source and go from there, as far as search-engine-derived AI snippets go.
- alwa 17 hours agoI tend to trust the voting system to separate the wheat from the chaff. If I were to try and draw a line, though, I’d start at the foundation: leave room for things that add value, avoid contributions that don’t. I’d suggest that line might be somewhere like “please don’t quote LLMs directly unless you can identify the specific value you’re adding above and beyond.” Or “…unless you’re adding original context or using them in a way that’s somehow non-obvious.”
Maybe that’s part of tracing your reasoning or crediting sources: “this got me curious about sand jar art, Gemini said Samuel Clemens was an important figure, I don’t know whether that’s historically true but it did lead me to his very cool body of work [0] which seems relevant here.”
Maybe it’s “I think [x]. The LLM said it in a particularly elegant way: [y]”
And of course meta-discussion seems fine: “ChatGPT with the new Foo module says [x], which is a clear improvement over before, when it said [y]”
There’s the laziness factor and also the credibility factor. LLM slop speaks in the voice of god, and it’s especially frustrating when people post its words without the clues we use to gauge credibility. To me those include the model, the prompt, any customizations, prior rounds in context, and any citations (real or hallucinated) the LLM includes. In that sense I wonder if it makes sense to normalize linking to the full session transcript if you’re going to cite an LLM.
- zby 17 hours agoWhat is banned here? I can only find guidelines: https://news.ycombinator.com/newsguidelines.html not rules.
- ilc 19 hours agoNo, I put them with lmgtfy. You are being told that your question is easy to research and you didn't do the work, most of the time.
Also heaven forbid, AI can be right. I realize this is a shocker to many here. But AI has use, especially in easy cases.
- bigstrat2003 19 hours ago"I asked AI and it said" is far worse than lmgtfy (which is already rude) because it has zero value as evidence. AI can be right, but it's wrong often enough that you can't actually use it to determine the truth of something.
- zepolen 18 hours agoHow is lmgtfy rude?
- emaro 19 hours agoI don't think LLM responses mean a question is easy to research - they will always give an answer.
- watwut 19 hours ago1.) They are not replies to people asking questions.
2.) Posting an AI response has as much value as posting a random reddit comment.
3.) AI has value where you are able to factually verify it. If someone asks a question, they do not know the answer and are unable to validate the AI.
- ycosynot 18 hours agoAs a brain is made of small pebbles, an LLM is made of small pebbles. If it wants to talk, let it be. I am arguing metaphysically. Not only did it evolve partially out of randomness (and so with a kind of value as an enlightened POV on existence), but it is still evolving to be human, and even more than human. I believe LLMs should not be banned; "they" should be willfully, and cheerfully, included in the discourse.
I asked Perplexity, and Perplexity said: "Your metaphysical intuition is very much in line with live debates: once “small pebbles” are arranged into agents that talk, coordinate, and co-shape our world, there is a strong philosophical case that they should be brought inside our moral and political conversations rather than excluded by fiat."
- popalchemist 17 hours agoAbsolutely. Any of us could ask AI if we wanted to hear random unsubstantiated opinions. Why should that get in the way of what we all come here for, which is communication with humans?
- HeavyStorm 15 hours agoI think answers should be judged by content, not by the tool used to construct the answer.
Also, if you forbid people from telling you they consulted AI, they will just not say it.
- actionfromafar 13 hours agoAt least they can't paste a wall of text without it looking very weird.
- jasomill 15 hours agoShort answer: Probably not outright forbidden — but discouraged or constrained — because “I asked AI…” posts usually add noise, not insight.
(source: ChatGPT)
- maerF0x0 17 hours agoI see it as equivalently helpful to the folks who paste archive.is/ph links for paywalled content. It saves me time to do something I may have wanted to do regardless, and it's easy enough to fold if someone does post a wall of response.
IMO hiding such content is the job of an extension.
When I do "here's what chatgpt has to say" it's usually because I'm pretty confident of a thing but have no idea what the original source was, and I'm not going to invest much time in resurrecting the trail back to where I first learned it. I'm not going to spend 60 minutes to properly source an HN comment; it's just not the level of discussion I'm willing to have, though many in the community seem to require an academic level of investment.
- amelius 17 hours agoCan't we have a unicode escape sequence for anything generated by AI?
Then we can just filter it at the browser level.
In fact why don't we have glyphs for it? Like special quote characters.
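No such markers exist in Unicode today, but as a thought experiment, suppose AI text were wrapped in two agreed-upon private-use codepoints; a small content script could then hide it (the codepoints here are arbitrary assumptions):

    // Arbitrary private-use codepoints standing in for "begin/end AI text";
    // no such convention actually exists in Unicode.
    const AI_OPEN = "\uE000";
    const AI_CLOSE = "\uE001";

    // Content-script sketch: walk every text node and collapse marked spans.
    function stripAiSpans(root: Node): void {
      const pattern = new RegExp(`${AI_OPEN}[^${AI_CLOSE}]*${AI_CLOSE}`, "g");
      const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
      for (let n = walker.nextNode(); n; n = walker.nextNode()) {
        n.textContent = (n.textContent ?? "").replace(pattern, "[AI text hidden]");
      }
    }

    stripAiSpans(document.body);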
- nlawalker 19 hours agoNo, just upvote or downvote. I think the site guidelines could take a stance on it though, encouraging people to post human insights and discouraging comments that are effectively LLM output (regardless of whether they actually are).
- GaryBluto 15 hours agoIt's a tad rich to be on HN for such a small amount of time and already be trying to sway the rules to what you wish them to be.
- Zak 19 hours agoI don't think people should post the unfiltered output of an LLM as if it has value. If a question in a comment has a single correct answer that is so easily discoverable, I might downvote the comment instead.
I'm not sure making a rule would be helpful though, as I think people would ignore it and just not label the source of their comment. I'd like to be wrong about that.
- sebastiennight 17 hours agoMost comments I've seen are comparing this behavior to "I googled it and..." but I think this misses the point.
Someone once put it as, "sharing your LLM conversations with others is as interesting to them as narrating the details of your dreams", which I find eerily accurate.
We are here in this human space in the pursuit of learning, edification, debate, and (hopefully) truth.
There is a qualitative difference between the unreliability of pseudonymous humans here vs the unreliability of LLM output.
And it is the same qualitative difference that makes it interesting to have some random poster share their (potentially incorrect) factual understanding, and uninteresting if the same person said "look, I have no idea, but in a dream last night it seemed to me that..."
- Jimmc414 17 hours agoBanning, no. Proper citations and disclosure, yes. Sometimes an AI response is noteworthy and it is the point of the post.
- alienbaby 16 hours agoThis will be fine until you can't tell the difference and people forgo the 'i asked' part.
- monknomo 18 hours agoI do not think so. If I wanted an ai's opinion, I'd ask the ai.
Should we allow 'let me google that for you' responses?
- sodapopcan 18 hours agoThat and replies that start with "No"
- Aachen 17 hours agoNow I'm curious what kinds of comments you mean
- sodapopcan 15 hours agohaha, you've never seen any? It's pretty classic HN. If someone thinks a comment is false the reply will look something like: "No. What you really meant to say is..." and often times the "No." will be its own paragraph.
- uhfraid 15 hours agoIMO, I don’t think they add any value to HN discussions
It’s the HN equivalent to “@grok is this true?”, but worse
- pyuser583 9 hours agoI recently asked an AI about a very important topic in current events. It gave me a shocking answer, which I initially assumed was wrong - but it seems correct.
The question was something like: “how reliable is the science behind misinformation.” And it said something like: “quality level is very poor and far below what justifies current public discourse.”
I ask for a specific article backing this up, and it’s saying “there isn’t any one article, I just analyzed the existing literature and it stinks.”
This matters quite a bit. X - formerly Twitter - is being fined for refusing to make its data available for misinformation research.
I’m trying to get it to give me a non-AI source, but it’s saying it doesn’t exist.
If this is true, it's pretty important, and something worth discussing. But it doesn't seem supportable outside the context of “my AI said.”
- bryanlarsen 19 hours agoWhat is annoying about them is that they tend to be long with a low signal/noise ratio. I'd be fine with a comment saying. "I think the ChatGPT answer is informative: [link]". It'd still likely get downvoted to the bottom of the discussion, where it likely belongs.
- alienbaby 16 hours agoThis will be fine until you can't tell the difference and they forgo the 'i asked'.
- skobes 19 hours agoI hate these too, but I'm worried that a ban just incentivizes being more sneaky about it.
- bawolff 17 hours agoI would consider that an https://xkcd.com/810/ situation.
My objection to AI comments is not that they are AI per se, but they are noise. If people are sneaky enough that they start making valuable AI comments, well that is great.
- llm_nerd 19 hours agoI think people are just presuming that others are regurgitating AI pablum regardless.
People are seeing AI / LLMs everywhere — swinging at ghosts — and declaring that everyone is a bot recycling LLM output. While the "this is what AI says..." posts are obnoxious (and a parallel to the equally boorish lmgtfy nonsense), not far behind are the endless "this sounds like AI" type cynical jeering. People need to display how world-weary and jaded they are, expressing their malcontent with the rise of AI.
And yes, I used an em dash above. I've always been a heavy user of the punctuation (being a scattered-brain with lots of parenthetical asides and little ability to self-edit) but suddenly now it makes my comments bot-like and AI-suspect.
I've been downvoted before for making this obvious, painfully true observation, but HNers, and people in general, are much less capable of sniffing out AI content than they think they are. Everyone has confirmation-biased themselves into thinking they've got a unique gift, when really they are no better than rolling dice.
- nottorp 19 hours agoThing is, the comments that sound "AI" generated but aren't have about as much value as the ones that really are.
Tbh the comments this topic is about shouldn't be completely banned. As someone else said, they have a place, for example when comparing LLM output or various prompts giving different hallucinations.
But most of them are just reputation chasing by posting a summary of something that is usually below the level of HN discussion.
- llm_nerd 17 hours ago>the comments that sound "AI" generated but aren't have about as much value as the ones that really are
When "sounds AI generated" is in the eye of the beholder, this is an utterly worthless differentiation. I mean, it's actually a rather ironic comment given that I just pointed out that people are hilariously bad at determining if something is AI generated, and at this point people making such declarations are usually announcing their own ignorance, or alternately they're pathetically trying to prejudice other readers.
People now simply declare opinions they disagree with as "AI", in the same way that people think people with contrary positions can't possibly be real and must be bots, NPCs, shills, and so on. It's all incredibly boring.
- razingeden 15 hours agoI only get upset about it when the AI didn’t read the article either.
- Havoc 16 hours agoI'd say it's annoying & low value but doesn't quite warrant a ban per se.
Plus if you ban it, people will just remove the "AI said" part and post it as-is without reading, and now you're engaging with an AI without even the courtesy of knowing. That seems even worse.
- phoe-krk 18 hours agoYes. I'd prefer comments that have intent, not just high statistical probability.
- Aloha 18 hours agoI also endorse this - maybe not an outright ban, but at least strong discouragement.
- steveBK123 17 hours agoBan it. It is the "let me google that for you" of the 2020s
- jpease 19 hours agoI asked AI if “I asked AI, and it said” replies should be forbidden, and it said…
- markus_zhang 17 hours agoIt's fine as long as people take the effort to double-check the answers.
- lonelyasacloud 16 hours agoTL;DR: Until we are sure we have the moderation systems to assist in surfacing the good stuff, I would be in favour of temporary guidelines to maintain quality.
Longer ...
I am here for the interesting conversations and polite debate.
In principle I have no issue either with citing AI responses in much the same way we do any other source, or with individuals prompting AIs to generate interesting responses on their behalf. When done well I believe it can improve discourse.
Practically though, we know that the volume of content AIs can generate tends to overwhelm human-based moderation and review systems. I like the signal-to-noise ratio as it is; so from my pov I'd be in favour of a cautious approach, with a temporary guideline against its usage until we are sure we have the moderation tools to preserve that quality.
- bloppe 16 hours agoI don't think this needs to be banned, particularly because it wouldn't be very effective (people would just get rid of the "AI said" part), and also because anybody who actually writes a comment like that would probably get downvoted out of the conversation anyway.
Why introduce an unnecessary and ineffective regulation?
- bawolff 17 hours agoYes, they are almost always low value comments.
- seizethecheese 16 hours agoNo, because this is self correcting behavior. If the comments are annoying, people will downvote them. In the rare case this is appropriate, they should be allowed. Guidelines are for things that will naturally be upvoted but shouldn’t be.
- theLegionWithin 18 hours agoHow are you going to enforce it? If someone does that & reformats the text a bit, it'll look like a unique response.
- stevenalowe 16 hours agoI think the AIs should post directly
- djoldman 17 hours agoThe current upvote/downvote mechanism seems more than adequate to address these concerns.
If someone thinks an "I asked $AI, and it said" comment is bad, then they can downvote it.
As an aside, at times it may be insightful or curious to see what an AI actually says...
- insane_dreamer 13 hours agoI think AI can be useful to cite in comments as a source of information. I.e., where you might otherwise say "According to Bloomberg, CPI is up 5% in the past 6 months[0]" with [0] linking to a page where you got that info, you could have "According to Claude/GPT/Gemini, CPI is up 5% in the past 6 months" ideally with [0] being the prompt used.
- mattnewton 17 hours ago1 and 3, straight to jail. 2 is fine
- iambateman 15 hours agoWe should prefer shaming and humiliation over forbiddance––norms beat laws in such situations.
Of course I prefer to read the thoughts of an actual human on here, but I don't think it makes sense to update the guidelines. Eventually the guidelines would get so long and tedious that no one would pay attention to them and they'd stop working altogether.
(did I include the non-word forbiddance to emphasize the point that a human––not a robot––wrote this comment? Yes, yes I did.)
- ahmadtbk 16 hours agoAI writing should be rewritten, or at least polished, as a form of respect for others.
- breckinloggins 19 hours agoIf it’s part of an otherwise coherent post making a larger point I have no issue with it.
If it’s a low effort copy pasta post I think downvotes are sufficient unless it starts to obliterate the signal vs noise ratio on the site.
- Akronymus 17 hours agoI'd love to say yes, but it's basically unenforceable if the comment doesn't disclose it itself.
- submeta 17 hours agoWhile I agree that we should be genuinely engaging with each other on this platform, trying to disallow all AI generated content reminds me of the naysayers when it comes to letting llms write code.
Yes, if you wanted to ask an llm, you’d do so, but someone else asks a specific question to the llm, and generates an answer that’s specific to his question. And that might add value to the discussion.
- trenchgun 17 hours agoPermaban on first strike.
- myst 18 hours agoI remember times when this sentiment was being expressed about “According to Wikipedia...” As much as I am pro implementing this rule, I’m afraid we are losing this fight.
- mx7zysuj4xew 16 hours agoYes, unequivocally yes.
- erelong 9 hours agoNo:
I actually kind of find it surprising that this post and the top comments saying "yes" even exist because I think the answer should be so firmly "no", but I'll explain what I like to post elsewhere using AI (edit: and some reasons why I think LLM output is useful):
1. A unique human made prompt
2. AI output, designated as "AI says:". This saves you the tokens and time of copying and pasting to get the output yourself, and it's really just to give you more info that you could argue for or against (adding a lot of "value" for the conversation to consider).
3. Usually I do some manual skimming and trimming of the AI output to make sure it's saying something I'd like to share; just like I don't purely "vibe code" but usually kind of skim output to make sure it's not doing something "extremely bad". The "AI says:" disclaimer makes clear that I may have missed something, but usually there's useful information in the output that is probably better or less time consuming than doing lots of manual research. It's literally like citing Wikipedia or a web search and encouraging you to cross-check the info if it sounds questionable, but the info is good enough most of the time such that it seems valuable to share it.
Other points:
A. The AI-generated answers are just so good... Refusing them feels akin to people here not using AI to program (while I see a lot of posts saying the opposite, that they have had very positive experiences using AI to program). It's really the same kind of idea. I think the key is in "unique prompts"; that's the human element in the discussion. Essentially I am sharing "tweets" (microblogs) and then AI-generated essays about the topic, so maybe I have a different perspective on why I think this is totally acceptable: you can always just scroll past AI output if it's labeled as such. Maybe it makes more sense in context to me? Even for this post, you could have asked an AI "what are the pros and cons of allowing people to use LLM output to make comments" (a unique human prompt to add to the conversation) and then pasted the AI output for people to consider, and I'd anticipate that would generate a pretty good essay to read.
B. This is kind of like schools: AI is probably going to force them to adapt somehow, because you could just add "respond in such a way as to be less detectable to a human" or something like that to a prompt. At some point it's impossible to tell whether someone is "cheating" in school or posting LLM output in the comments here. But you don't need to despair, because what ultimately matters in forum comments is that the information is useful, and if LLM output is useful it will be upvoted. (In other concerning news related to this, I'm pretty sure they're working on how to generate forum posts and comments without humans being involved at all!)
So I guess for me the conversation is more about how to handle LLM output, and maybe about people learning how to comment or post with AI assistance (much like people are learning to code with AI assistance), rather than about totally banning it (which to me seems very counter-productive).
edit: (100% human post btw!)
- LocalH 11 hours agoAbsolutely, in cases where it's clear that the person asked a chatbot and copypasted the direct LLM response without editing. I usually downvote those (if it's within the window to do so), and flag them.
- testdelacc1 19 hours agoMaybe I remember the Grok ones more clearly but it felt like “I asked Grok” was more prevalent than the others.
I feel like the HN guidelines could take inspiration from how Oxide uses LLMs. (https://rfd.shared.oxide.computer/rfd/0576). Specifically the part where using LLMs to write comments violates the implicit social contract that the writer should put more care and effort and time into it than the reader. The reader reads it because they assume this is something a person has put more time into than they need to. LLMs break that social contract.
Of course, if it’s banned maybe people just stop admitting it.
- Gud 15 hours agoAbsolutely.
I am blown away by LLMs - I now use ChatGPT to help me write, in seconds or minutes, Python scripts that used to take me hours or weeks.
Yet, when I ask a question, or wish to discuss something on here, I do it because I want input from another meatbag in the hacker news collective.
I don’t want some corporate BS.
- jdoliner 19 hours agoI've always liked that HN typically has comments that are small bits of research relevant to the post that I could have done myself but don't have to because someone else did it for me. In a sense the "I asked $AI, and it said" comments are just the evolved form of that. However, the presentation does matter a little, at least to me. Explicitly stating that you asked AI feels a little like an appeal to authority... and a bad one at that. And it makes the comment feel low effort. Oftentimes, comments that frame themselves this way will be missing the "last-mile" effort that tailors the LLM's response to the context of the post.
So I think maybe the guidelines should say something like:
HN readers appreciate research in comments that brings information relevant to the post. The best way to make such a comment is to find the information, summarize it in your own words that explain why it's relevant to the post and then link to the source if necessary. Adding "$AI said" or "Google said" generally makes your post worse.
---------
Also I asked ChatGPT and it said:
Short Answer
HN shouldn’t outright ban those comments, but it should culturally discourage them, the same way it discourages low-effort regurgitation, sensationalism, or unearned certainty. HN works when people bring their own insight, not when they paste the output of a stochastic parrot.
A rule probably isn’t needed. A norm is.
- nailer 11 hours agoYes. We should also ban citing Wikipedia. If you don't know, Wikipedia itself recommends you don't cite it: quote the sources Wikipedia or your AI uses; don't quote secondary sources.
- srcreigh 17 hours agoThe guidelines are just fine as they are.
Low effort LLM crap is bad.
Flame bait uncurious mob pile-ons (this thread) are also bad.
Use the downvote button.
- venturecruelty 12 hours agoWell yes, of course, but that might interfere with the pump, so unfortunately, you will kindly be asked to report to your nearest re-education center for correction. Thank you for understanding.
- MBCook 19 hours agoYes, please. It’s extremely low effort. If you’re not adding anything of value (typing into another window and copying and pasting the output are not) then it serves no purpose.
It’s the same as “this” or “wut” but much longer.
If you’re posting that and ANALYZING the output that’s different. That could be useful. You added something there.
- FromOmelas 18 hours agorather than ban, I would prefer posts/comments are labeled as such.
with features (a rough sketch follows the list):
- ability to hide AI labeled replies (by default)
- assign lower weight when appropriate
- if a user is suspected to be AI-generated, retroactively label all their replies as "suspected AI"
- in addition to downvote/upvote, a "I think this is AI" counter
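Purely as an illustration, a minimal TypeScript sketch of how those features might fit together; all names and thresholds here are invented, not anything HN actually has:

    interface Comment {
      id: string;
      author: string;
      text: string;
      aiLabel: "human" | "declared-ai" | "suspected-ai";
      aiVotes: number; // the "I think this is AI" counter
      score: number;   // ordinary upvote/downvote score
    }

    // Assign lower weight to AI-labeled replies when ranking.
    function rankWeight(c: Comment): number {
      const penalty = c.aiLabel === "human" ? 1.0 : 0.3;
      return c.score * penalty;
    }

    // Hide AI-labeled (or heavily AI-voted) replies by default.
    function visibleByDefault(c: Comment): boolean {
      return c.aiLabel === "human" && c.aiVotes < 5;
    }

    // If a user is suspected to be AI, retroactively relabel their replies.
    function relabelUser(comments: Comment[], author: string): void {
      for (const c of comments) {
        if (c.author === author && c.aiLabel === "human") {
          c.aiLabel = "suspected-ai";
        }
      }
    }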
- thenaturalist 12 hours agoYes.
Thank you for your attention on this matter.
- XorNot 14 hours agoYes. Unambiguously. I want this exact behavior to lead to social ostracism everywhere.
Edit: I'm happy to add two related categories to that too - telling someone to "ask ChatGPT" or "Google it" is a similar level offense.
- whimsicalism 19 hours agoI think comments like this should link their generation rather than C+P it. Not sure if this should be a rule or we can just let downvoting do the work - I worry that a rule would be overapplied and I think there are contexts that are okay.
- kylehotchkiss 17 hours agoIt's karma fishing, so yes, please ban it. While we're at it, just automatically add the archive.is link to any news article or don't allow voting on those comments ¯\_(ツ)_/¯
- lemper 7 hours agoyea, mate. definitely. if i want to hear / read some replies by machine, i'd ask them myself. the value of this site is the lived experience and thought of the users.
- mvdtnz 15 hours agoAI generated content should be absolutely banned without question. This includes comments and submissions.
- shaftoe444 16 hours agoYes
- fortran77 16 hours agoSometimes AI gives such a surprising or unusual answer to a question that it's worth a discussion. I think it should be discouraged but not "forbidden".
- freejazz 16 hours agoYes.
- TZubiri 16 hours agoIn my experience with managing teams, you want to encourage and not forbid this because the alternative is people will use llms without telling, which is 100 times worse than disclosed LLM use.
- buellerbueller 16 hours agoI agree and think the solution is to get rid of the LLMs.
- prpl 19 hours agowere lmgtfy links ever forbidden?
- bakugo 17 hours agoWhile I do think such comments are pointless and almost never add anything to the discussion, I don't believe they're anywhere near as actively harmful as comments and (especially) submissions that are largely or entirely AI generated with no disclosure.
I've been seeing more and more of these on the front page lately.
- etchalon 18 hours agoYes.
- jeffbee 18 hours agoIf it was my personal site I would instantly ban all such accounts. They are basically virus-carrying individuals from outer space, here to destroy the discourse.
Since that isn't likely to happen, perhaps the community can develop a browser extension that calls attention to or suppresses such accounts.
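A rough userscript-style sketch of what that could look like; the phrase list and the ".comment" selector are assumptions, not HN's actual markup:

    // Dim comments that look like pasted LLM output.
    const AI_PATTERNS: RegExp[] = [
      /\bI asked (ChatGPT|Claude|Gemini|Grok)\b/i,
      /\bhere('s| is) what (the )?(AI|LLM) said\b/i,
    ];

    document.querySelectorAll<HTMLElement>(".comment").forEach((el) => {
      if (AI_PATTERNS.some((re) => re.test(el.innerText))) {
        el.style.opacity = "0.4";                // suppress it...
        el.title = "Possible pasted LLM output"; // ...or just call attention to it
      }
    });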
- bbor 18 hours ago
> large LLM-generated texts just get in the way of reading real text from real humans
In terms of reasons for platform-level censorship, "I have to scroll sometimes" seems like a bad one.
- muwtyhg 18 hours agoThis feels like an oversimplification of the issue. Why moderate at all? Spam posted here would only require you to "have to scroll sometimes".
- satisfice 19 hours agoOnly if they also do a google search, provide the top one hundred hits, and paste in a relevant Wikipedia page.
- Mistletoe 19 hours agoI don’t see how it is much different than using Wikipedia. They usually give about the same answer, and at least in Gemini it is usually a correct answer now.
- exasperaited 19 hours agoNo, don't ban it. It's a useful signal for value judgements.
- ahmadtbk 16 hours agoAI slop is very exhausting to understand. If it's well written, maybe not. If it's obviously AI, then it should be flagged.
- 63stack 19 hours agoYes
- 0x00cl 19 hours agoThis is what DeepSeek said:
> 1. Existing guidelines already handle low-value content. If an AI reply is shallow or off-topic, it gets downvoted or flagged.
>
> 2. Transparency is good. Explicitly citing an AI is better than users passing off its output as their own, which a ban might encourage.
>
> 3. The community can self-regulate. We don't need a new rule for every type of low-effort content.
>
> The issue is low effort, not the tool used. Let downvotes handle it.
- debo_ 19 hours agoI was hoping someone did this.
- Kim_Bruning 16 hours agoBut what [if the llms generate] constructive and helpful comments? https://xkcd.com/810/
For obvious(?) reasons I won't point to some recent comments that I suspect, but they were kind and gentle in the way that Opus 4.5 can be at times; encouraging humans to be good with each other.
I think the rules should be similar to bot rules I saw on wikipedia. It ought to be ok to USE an AI in the process of making a comment, but the comment needs to be 'owned' by the human/the account posting it.
Eg. if it's a helpful comment, it should be upvoted. If it's not helpful, downvoted; and with a little luck people will be encouraged/discouraged from using AI in inappropriate ways.
"I asked gemini, and gemini said..." is probably the wrong format, if it's otherwise (un)useful, just vote it accordingly?
- cdelsolar 17 hours agonah
- ok123456 17 hours agoyes
- WithinReason 17 hours agoThat's what the upvote/downvote system is for.
- mistrial9 19 hours agothe system of long-lived nicks on YNews is intended to build a mild and flexible reputation system. This is valuable for complex topics, and for noticing zealots, among other things. The feeling while reading that it is a community of peers is important.
AI-LLM replies break all of these things. AI-LLM replies must be declared as such, for certain IMHO. It seems desirable to have off-page links for (inevitable) lengthy reply content.
This is an existential change for online communications. Many smart people here have predicted it and acted on it already. It is certainly trending hard for the foreseeable future.
- WesolyKubeczek 19 hours agoYes. If I wanted an LLM’s opinion, I would have asked it myself.
- newsoftheday 19 hours agoWould your prompt have been identical and produced identical results, today or tomorrow? Which version of the AI would you have used? Were there bugs present that made the post or comment interesting, bugs that would have been absent in your response because they had already been fixed?
- nobody9999 13 hours ago>Would your prompt have been identical and produced identical results, today or tomorrow? Which version of the AI would you have used? Were there bugs present that made the post or comment interesting, bugs that would have been absent in your response because they had already been fixed?
Why is that relevant to GP's point?
I can't speak for anyone else, but I come to HN to discuss stuff with other humans. If I wanted an LLM's (it's not AI, it's a predictive text algorithm) regurgitations, I could generate those myself; I don't need "helpful" HNers to do it for me unasked.
When I come here I want to have a discussion with other sentient beings, not the gestalt of training data regurgitated by a bot.
Perhaps that makes me old-fashioned and/or bigoted against interacting with large language models, but that's what I want.
In discussion, I want to know what other sentient beings think, not an aggregation of text tokens based on their probability of being used in a particular sequence, as determined by the data fed to the model.
The former can be (but may well not be) a creative, intellectual act by a sentient being. The latter will never be so, as it's an aggregation of existing data/information, a sequence of tokens cobbled together based on the frequency with which such tokens appear in a particular order in the model's corpus.
That's not to say that LLMs are useless. They are not. But their place is not in "curious conversation," IMNSHO.
- WesolyKubeczek 18 hours agoIn any case, it should have some more thought to it, some summary, some highlight, what you find useful/insightful about it. Just dumping the response is lazy and disrespectful.
And if two people can get two opposite results by giving the same prompt which asks a very specific question to the same model, it looks like bunk anyway. LLMs don't care if they are correct.
- pembrook 19 hours agoNo, this is not a good rule.
What AI regurgitates about a topic is often more interesting and fact/data-based than the emotionally-driven human pessimists spewing constant cynicism on HN, so in fact I much prefer having more rational AI responses added in as context within a conversation.
- zug_zug 17 hours agoIronic that your comment got downvoted... gosh darn emotional humans
- tehwebguy 19 hours agoIt should be allowed and downvoted
- mrguyorama 17 hours agoUmm, just to be clear:
HN is not actually a democracy. The rules are not voted on. They are set by the people who own and run HN.
Please tell me what you think those people think of this question.
- syockit 19 hours agoYou can add the guideline, but then people would skip the "I asked" part and post the answer straight away. Apart from the obvious LLMesque structure of most of those bot answers, how could you tell if one has crafted the answer so much that it looks like a genuine human answer?
Obligatory xkcd https://xkcd.com/810/
- card_zero 19 hours ago15 years ago ... needs updating in light of how things panned out.
- legohead 18 hours agoLots of old man yelling at clouds energy in here.
This is new territory, you don't ban it, you adapt with it.
- slowmovintarget 11 hours agoWhile I'd certainly prefer raw human authorship on a forum like this, I can't help but think this is the wrong question. The labeling up front appears to be merely a disclosure style. That is, commenters say that as a way of notifying the reader that they used an LLM to arrive at the answer (at least here on HN) rather than citing the LLM as an authority.
"Banning" the comment syntax would merely ban the form of notification. People are going to look stuff up with an LLM. It's 2025; that's what we do instead of search these days. Just like we used to comment "Well Google says..." or "According to Alta Vista..."
Proscribing quoting an LLM is a losing proposition. Commenters will just omit disclosure.
I'd lean toward officially ignoring it, or alternatively ask that disclosure take on less conversational form. For example, use quote syntax and cite the LLM. e.g.:
> Blah blah slop slop slop
-- ChatGippity
- Razengan 13 hours agoFighting the zeitgeist never works out. The world's gonna move on whether you move on with it or not.
I for one would love to have summary executions for anyone who says that Hello-Fellow-Kids cringe pushed on us by middle-aged squares: "vibe"
- dominotw 19 hours agoi asked chatgpt and it said no its not a good idea to ban
- stocksinsmocks 14 hours agoI already assume everything, and I mean everything, that I read in any comment section, whether here or sewers like Reddit, X, or one of the many Twitter-but-Communist clones, is either:
1. Paid marketing (tech stacks, political hackery, Rust evangelism)
2. Some sociopath talking his own book
3. Someone who spouts off about things he doesn’t know about (see: this post’s author)
The internet of real people died decades ago and we can only wander in the polished megalithic ruins of that enlightened age.
- bjourne 15 hours agoYes, please. LLMs are poisoning all online conversations everywhere. It's an epidemic global plague.
- ben_w 19 hours agoDepends on the context.
I find myself downvoting (flagging) them when I see them as submissions, and I can't think of any examples where they were good submission content; but for comments? There's enough discussion where the AI is the subject itself and therefore it's genuinely relevant what the AI says.
Then there's stuff like this, which I'd not seen myself before seeing your question, but I'd say asking people here whether an AI-generated TLDR of a 74 (75?) page PDF is correct is a perfectly valid and sensible use: https://news.ycombinator.com/item?id=46164360
- hooverd 17 hours agoYes please. I don't care if somebody did their own research via one, but it's just so low effort.
- renewiltord 19 hours ago[dead]
- ruined 19 hours agoyou got a downvote button
- leephillips 19 hours agoPosting this kind of slop should be a banning offense. Also: https://hn-ai.org/
- moomoo11 19 hours agoHonestly, I judge people pretty harshly. I ask people a question in honest good faith. If they’re trying to help me out, genuinely care, and use AI, fine.
But most of the time it’s like they were bothered that I asked, and they just copy-paste what an AI said.
Pretty easy. Just add their name to my “GFY” list and move on in my life.
- createaccount99 18 hours agoForbidden? They should be mandatory.
- ekjhgkejhgk 19 hours agoI don't think they should be banned, I think they should be encouraged: I'm always appreciative when people who can't think for themselves openly identify themselves so that it costs me less effort to spot them.
- cm2012 18 hours agoThese comments are in the top 10% of usefulness of all comments in those threads. Clear, legible information that is easy to read and relevant. Keep!