MarsIronPI's comments

They'll start minding when things start breaking. In the meantime I'll work on stuff AI is still not so great at.

Some of us need a paycheck and have to work on whatever LLM project the CEO demands, and if it fails, the developer gets blamed.

> Even if LLMs get better, English itself is a bad programming language, because it's imprecise and not modular.

I absolutely agree. I've only dabbled in AI coding, but every time I feel like I can't quite describe to it what I want. IMO we should be looking into writing some kind of pseudocode for LLMs. Writing code is the best way to describe code, even if it's just pseudocode.


It does to a certain audience: the people who care about privacy, security and freedom.

I suspect that as time goes on our numbers will only increase.


My non-technical mother recently texted the family group chat to try to get us to use Signal. The winds are shifting towards privacy in a broader sense than ever before. This type of counterargument ("that doesn't sell [product]") is usually a bad argument when the market doesn't offer anything that actually sells on privacy. It becomes a self-fulfilling prophecy.

Hopefully there are only so many times Meta can suggest some creepy connection you didn't want made before people start valuing privacy.

As much as I wish otherwise, it's going the other way. Caring about those three requires literacy, which, in the world of LLMs, is one thing that's going to decline for humankind as a whole.

Agreed, GrapheneOS is the only reason I picked up a Pixel. A Google phone would otherwise have been a "no thanks" from me.

I think you could look into Mistral. There's also GPT-OSS, but I'm not sure how well it stacks up.

What's your problem with Chinese LLMs?


Nothing personal. Our customers send us highly sensitive financial documents to process. Using a foreign model to process their data (or even just for local testing) would most likely result in a U-turn.

What if you run them locally, or use a US-based provider that hosts them? IMO the provenance of the weights doesn't matter. You're right that the location of the hoster does, though.

it’s not obvious to you why someone would want to avoid models created by our enemies?

As a European, I trust China more than America. China doesn't just start bombing other countries and cause regime changes.

No, explain it to me. GPT-OSS is one of the most heavily-censored models on the internet, what's the point of buying local if it's crap?

No, it's not. They're just collections of numbers that can be harnessed to produce outputs. I check the outputs and if they're good I use them. If they're not, I ignore them and there's no harm done. Obviously I don't trust them to be accurate sources of information, but I don't trust American corporate LLMs much more.

I've had good experience with GLM-4.7 and GLM-5.0. How would you compare them with Qwen 3.5? (If you have any experience with them.)

No experience with 5 and not much with 4.7, but they both have quite a few advocates over on /r/localllama.

Unsloth's GLM-4.7-Flash-BF16.gguf is quite fast on the 6000, at around 100 t/s, but definitely not as smart as the Qwen 3.5 MoE or dense models of similar size. As far as I'm concerned Qwen 3.5 renders most other open models short of perhaps Kimi 2.5 obsolete for general queries, although other models are still said to be better for local agentic use. That, I haven't tried.


I assume you mean Iran rather than Iraq, but your point still stands.

Have you tried Eat[0]? It's a reasonably fast terminal emulator that integrates with Eshell so that all commands run in Eshell have full terminal emulation (but they're still run in the original Eshell buffer, which makes it better than `eshell-visual-commands'). I haven't had any terminal emulation problems since switching to it.

[0]: https://codeberg.org/akib/emacs-eat

With regards to completion, I use corfu, which gives me nice inline popups. I use the bash-completion package, so I don't have issues with programs that don't provide Eshell completions (which are basically all of them).
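For reference, a minimal setup along these lines might look like the sketch below. The hook and function names are the ones I remember from each package's README (Eat, corfu, bash-completion); double-check them against your installed versions.

```elisp
;; Eat's Eshell integration: terminal emulation inside the Eshell buffer itself
(add-hook 'eshell-load-hook #'eat-eshell-mode)

;; Corfu for inline completion popups everywhere, including Eshell
(global-corfu-mode)

;; Fall back to bash's programmable completion for commands that
;; don't ship Eshell completion functions (assumes the capf name
;; from bash-completion's README; verify for your version)
(add-hook 'eshell-mode-hook
          (lambda ()
            (add-hook 'completion-at-point-functions
                      #'bash-completion-capf-nonexclusive nil t)))
```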


This is extremely helpful. I have never considered the possibility that there could be a better method to deal with emulation than visual commands.

You have no idea how much this helps me.


You have to turn on eat-eshell-mode to enable Eat's terminal emulation in eshell.

It runs full-fledged TUIs like vim and ncmpcpp in Eshell slowly, but is good enough for quick fzf uses. It's perfectly fine for "small" dynamic elements like the spinners and progress bars used by package managers.

Just remember to use system pipes (with "*|") instead of Elisp pipes (with "|") if you're piping data into an interactive TUI application like fzf in Eshell.
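Concretely, the difference at the Eshell prompt looks like this (fzf as the example interactive consumer; output flow as I understand it):

```
~ $ find . -type f *| fzf   # "*|" is a real system pipe; fzf's TUI works
~ $ find . -type f | fzf    # plain "|" routes output through Emacs; the TUI misbehaves
```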


How does eat detect a visual command in eshell? I use vterm in Emacs for visual commands like nvim and htop. But it's triggered manually with a simple custom prefix command (just 'v') added to the actual command. I wonder if that trigger could be automated. It sounds from your description like vterm is faster than eat. If so, a similar automatic trigger for vterm could be very beneficial.

There's some miscommunication here.

> How does eat detect a visual command in eshell?

eat-eshell-mode doesn't detect visual commands and launch a separate Eat buffer, like the eshell-visual-commands mechanism does. It filters all process output in Eshell and handles terminal escape codes. It turns the Eshell buffer itself into a terminal, so that vim or whatever runs in Eshell.

> It sounds from your description like vterm is faster than eat.

vterm is faster than eat, but a dedicated eat buffer is fast enough for most common TUIs. An eshell buffer with eat-eshell-mode is slower.


Visual commands only differ from normal commands in the escape codes they use (enabling the alternate screen buffer, clearing the screen, etc.). Eshell can't deal with those (and shouldn't, as it's a shell, not a terminal). Eat adds a layer that processes those escape codes, and that's all you need to handle visual commands.

I've retrained muscle memory to use C-c C-l (which I rebind to `consult-history'). This gives me a fuzzy-searchable list of all my history. I find that I prefer this to a normal shell's C-r, because with my vertical completion setup I can see multiple matches for my search simultaneously.
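The rebinding itself is a one-liner, assuming consult is installed and an Emacs recent enough (28+) that eshell-mode-map is a normal global keymap:

```elisp
;; Replace Eshell's default history listing on C-c C-l with
;; consult's fuzzy-searchable history
(with-eval-after-load 'esh-mode
  (define-key eshell-mode-map (kbd "C-c C-l") #'consult-history))
```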

Because the POTUS is the chief executive. His literal job is to manage the executive branch of the government. Unless his policies go against the law, there isn't anyone who can legally dispute his policies for the executive. And if he does something illegal, Congress can impeach and remove him.

At least, so goes the theory.


This is exactly what I'm hoping for. When these tools get to a certain level I'd love to see a TV adaptation of Harry Potter and the Methods of Rationality.
