metalliqaz's comments | Hacker News

oh merely 5k qubits?

Many more physical qubits, even if that extrapolation were true, which is doubtful.

No it isn't. I really do not care what the LLM has to say. If a person has taken the (substantial) time necessary to fill the context with enough information that something interesting comes out, I would much rather they simply give me the inputs. The middleman is just digested Internet text. I've already got one of those on my end.

Related: https://blog.gpkb.org/posts/just-send-me-the-prompt/

(I could have sworn there was a popular HN submission a while back of this or a similar blog post, but damned if I can find it now.)



Yes, you're abs- I mean, that's the one.

wow, it's like that guy is in my head

That does somewhat depend on the size of the context.

LLMs won't add information to context, so if the output is larger than the input then it's slop. They're much better at picking information out of context. If I have a corpus of information and prompt an extraction, the result may well contain more information than my short prompt (though not more than the corpus itself). It's not necessarily feasible to transfer the entire context, and I've also curated that specific result as suitably conveying my intended message.

This does all take effort.

My take is also that I am interested in what people say: I have priors for how worthwhile I expect it to be to read stuff written by various people, and I will update my priors when they give me things to read. If they give me slop, that's going to affect what I think of them, and I expect the same in return. I'm willing to work quite hard to avoid asking my colleagues to read or review slop.


> LLMs won't add information to context, so if the output is larger than the input then it's slop

That doesn't align with my observations. A lot of the time they are able to add information to context. Sure, it's information I could have added myself, but they save me the time. They also do a great job of taking relatively terse context and expanding upon it so that it is more accessible to someone who lacks context. Brevity is often preferable, but that doesn't mean larger output is necessarily slop.


This post seems to conflate using `throw`, `raise`, etc. with crashing. The idea that 'handling' an error does not involve `throw`/`catch`, `try`/`except` is very strange to me. The exception facility is often the most elegant way to check inputs, and if I remember correctly the Python documentation says as much as well.
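For example (a minimal sketch of my own in Python; `parse_port` and its range check are hypothetical, not from the post):

    def parse_port(text):
        """Parse a TCP port, raising ValueError on bad input."""
        port = int(text)  # int() itself raises ValueError on non-numeric input
        if not 0 < port < 65536:
            raise ValueError(f"port out of range: {port}")
        return port

    try:
        port = parse_port("70000")
    except ValueError as exc:
        # The raise above is handled here; the program did not crash.
        print(f"invalid port: {exc}")

The `raise` inside `parse_port` is handled by the caller's `except`; nothing crashed.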

Don't use OpenAI models unless you want your full history to someday be shared with the US Government.

This applies to any US company. Have we forgotten everything we learned in 2013? If your data is shared with Google, Anthropic, Meta, Amazon, or any of their US competitors, it is within reach of the NSA. Whether or not a company provides support to the DoW is orthogonal to that fact.

Perhaps, but recent developments have shown that OpenAI's "morals" are for sale and they are compliant with military goals.

Recent developments offer us evidence, but did you really think this wouldn't be the case?

Altman is a consummate liar, so no, I didn't.

Some companies are more evil than others. OpenAI is more evil than Anthropic.

Yes, you can argue that the bar is low, and we can discuss it further from there, but surely you can agree with the above statement, given all the recent developments?


I think the distinction is pointless. If OpenAI and Anthropic both subscribe to a high baseline of evil, the difference in their principles or conduct isn't worth applauding. Anthropic's moral high road means nothing when contextualized with their Palantir partnership and preexisting DoD contract.

It all feels reminiscent of Google and Apple fighting over who had the more secure ecosystem, when both had already assented to hidden surveillance measures. Neither Anthropic nor OpenAI can be trusted by anyone who has even the slightest fear of the US government lashing out against them.


Same with Anthropic and everyone else?

I have tried this at various times. But, while some people are lovely, there are some that are miserable and there are plenty that are simply... uninterested. When you are slightly awkward, not particularly attractive, and not wealthy, you have to get used to people just turning away, avoiding eye contact, expressing mild cues of disgust, and so forth. It's really quite painful to try.

I like to point out that he was fired for egregious dishonesty.

Also when they start seeing real ads.

It's already being deployed as "ChatGPT Go": https://chatgpt.com/pricing/

    > This plan may include ads. Learn more 

    > When will ads be available in ChatGPT?
    We’re beginning in the US on February 9, 2026

    > Starting in February, if ads personalization is turned on, ads will be personalized based on your chats and any context ChatGPT uses to respond to you. If memory is on, ChatGPT may save and use memories and reference recent chats when selecting an ad. 

You pay 8 USD/month and get higher limits, plus ads.

Remember when everyone said Facebook would be dead if they started running ads?

Facebook is dead

Might be this:

> Y Combinator is starting a blacklist for venture capitalists accused of sexual harassment: "We don’t call it a blacklist, but that is essentially what is happening," Kat Manalac, a partner at the influential start-up incubator Y Combinator, said of the blast email.

https://www.forbes.com/sites/lorenfeldman/2017/07/17/todays-...


but is it blazingly fast?

if it's not in Rust or browser-based or a "cloud" service or the result of multi-GWh of LLM "training" or a VSCode plugin or ideally all of the above, then the HN kids won't be interested :-)

yeah that part is 100% BS
