Hacker News | dooglius's comments

The fact that terms like Aho-Corasick, PLDI, Go, etc. are properly capitalized, even when they begin sentences, while sentences are otherwise uncapitalized, makes me think it's an explicit LLM instruction ("don't capitalize the start of sentences") rather than a writing style.

ChatGPT also loves Aho-Corasick and seems to overuse it as a fallback optimization idea. It has suggested the algorithm to me, but the resulting code ended up significantly slower.

ChatGPT was heavily RL'd on competitive programming in 2025, and Aho-Corasick is a traditional algorithm in the competitive programming space.
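For readers unfamiliar with the algorithm the commenters keep mentioning: Aho-Corasick builds a trie of all patterns with "failure" links, so every occurrence of every pattern is found in a single pass over the text. A minimal pure-Python sketch (mine, not from any commenter's code; real uses would reach for an optimized library):

```python
from collections import deque

def build_automaton(patterns):
    """Build a trie with BFS-computed failure links over the patterns."""
    goto = [{}]   # goto[state][char] -> next state
    fail = [0]    # failure link per state
    out = [[]]    # patterns ending at each state

    for pat in patterns:
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append([])
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].append(pat)

    # BFS: each node's failure link points at the longest proper suffix
    # of its path that is also a path in the trie.
    q = deque(goto[0].values())
    while q:
        s = q.popleft()
        for ch, nxt in goto[s].items():
            q.append(nxt)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt].extend(out[fail[nxt]])  # inherit matches from suffix
    return goto, fail, out

def search(text, patterns):
    """Return (start_index, pattern) for every occurrence of any pattern."""
    goto, fail, out = build_automaton(patterns)
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in out[state]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```

The subtlety the upthread slowdown hints at: the automaton construction has a real constant-factor cost, so for a handful of short patterns a few plain `str.find` calls can easily win.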

No, this is just what that writing style looks like. Names and acronyms are usually capitalized normally.

I keep being surprised by the magnitude of the disconnect between this place and the other circles of hell. I'd have thought the Venn diagram would have a lot more overlap.


Oh, the Venn diagram might be big; the HN population just has a lot of variance, I think, and is less of a community per se. I don't doubt what you're saying, though in the grand scheme of things, I think the "too lazy to hit shift" population dwarfs any of these groups.

Yeah, I can agree with the variance. Except that the "too lazy to hit shift" community is not something I would ever confuse with people writing long form articles about their regex engine research that they'll be presenting at PLDI.

The confusion might be understandable for people who have never encountered this style before, but that's still a very uncharitable take about an otherwise pretty interesting article.


What's with this silly "all lower case" style lately?

Jack Dorsey's layoff message last month did the same thing.

Is it some kind of "Prove you're not an AI by purposely writing like an idiot" or something?


not anti-capitalist, just a subtle preference away from capitalism

It looks like that's about syntactic ambiguity, whereas the parent is talking about semantic ambiguity.

Is Z3 competitive in SAT competitions? My impression was that it is popular due to the theories, the Python API, and the level of support from MSR.

Funnily, this was precisely the question I had after posting this (and the topic of an LLM disagreement discussed in another thread). Turns out not, but sibling comment is another confounding factor.

Do you have reason to believe that you have a reliable way in these cases of determining whether the comment is generated?

Having been reading generated comments almost daily for over three years now, I have a pretty good sense of it. There are a bunch of signals: how new the account is; how the comments look visually (the capitalization and layout of the paragraphs, particularly when all of one user's comments are displayed in a list). Em-dashes and short, emphatic sentences make it more obvious, of course.

There are cases that are more borderline; usually when someone has used a translation service or has used an LLM to polish up a comment they wrote themselves. For these ones there's less certainty, and whilst we discourage them, we're not as rigid in our aversion to them or as eager to ban accounts that do it.

But ones that are entirely generated are still pretty easy to spot, even just from visual appearance.


The vast majority of human evolution happened in non-humans

Sure - though the tuned behaviour around turning the innate immune system up and down is probably dominated by the more recent part of that long history.

Don't take this the wrong way, but it seems like you did not actually learn to cope


Cope, yes. Thrive, no. Surviving forty-seven years alone at least counts as coping.


Let me put it this way, you do not seem to have learned to cope very well. Actually focusing on and learning to cope was a big improvement for me.


Can you elaborate on your hypothesis? Would them being "still there" imply the possibility of treatment to enable their effectiveness?


What exactly is hyper-skeptical about them?


How does WSL1 do it then?

Anyway, the section you are quoting makes no claim as to the permitted granularity.


Explained in the very first sentence, "This is the quarterly links ‘n’ updates post, a selection of things I’ve been reading and doing for the past few months."


I missed that sentence too. I guess the large heading starting with "(1)" drew my eyes first and felt like the natural place to start reading, while the short sentence or two above it in smaller text subconsciously felt skippable (and was skipped). Even when I went back to read the first sentence, I had to kind of force myself to read the stuff above the first large heading. How odd!

