> Even if LLMs get better, English itself is a bad programming language, because it's imprecise and not modular.
I absolutely agree. I've only dabbled in AI coding, but every time I feel like I can't quite describe to it what I want. IMO we should be looking into writing some kind of pseudocode for LLMs. Writing code is the best way to describe code, even if it's just pseudocode.
My non-technical mother recently texted the family group chat to try to get us to use Signal. The winds are shifting towards privacy in a broader sense than ever before. This type of counterargument ("that doesn't sell [product]") is usually a bad argument when the market doesn't offer anything that actually sells on privacy. It becomes a self-fulfilling prophecy.
As much as I wish, it's going the other way. Caring about the 3 requires literacy, which, in the world of LLMs, is one thing that's going to be reduced for humankind as a whole.
Nothing personal. Our customers send us highly sensitive financial documents to process. Using a foreign model to process their data (or even just for local testing) would most likely result in a U-turn.
What if you run them locally, or use a US-based provider that hosts them? IMO the provenance of the weights doesn't matter. You're right that the location of the hoster does, though.
No, it's not. They're just collections of numbers that can be harnessed to produce outputs. I check the outputs and if they're good I use them. If they're not, I ignore them and there's no harm done. Obviously I don't trust them to be accurate sources of information, but I don't trust American corporate LLMs much more.
No experience with 5 and not much with 4.7, but they both have quite a few advocates over on /r/localllama.
Unsloth's GLM-4.7-Flash-BF16.gguf is quite fast on the 6000, at around 100 t/s, but definitely not as smart as the Qwen 3.5 MoE or dense models of similar size. As far as I'm concerned Qwen 3.5 renders most other open models short of perhaps Kimi 2.5 obsolete for general queries, although other models are still said to be better for local agentic use. That, I haven't tried.
Have you tried Eat[0]? It's a reasonably fast terminal emulator that integrates with Eshell so that all commands run in Eshell have full terminal emulation (but they're still run in the original Eshell buffer, which makes it better than `eshell-visual-commands'). I haven't had any terminal emulation problems since switching to it.
With regards to completion, I use corfu, which gives me nice inline popups. I use the bash-completion package, so I don't have issues with programs that don't provide Eshell completions (which are basically all of them).
You have to turn on eat-eshell-mode to enable Eat's terminal emulation in eshell.
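Since `eat-eshell-mode' is a global minor mode, enabling it is a one-liner. A minimal sketch, assuming the `eat' package is already installed:

```elisp
;; Enable Eat's terminal emulation inside Eshell buffers.
;; `eat-eshell-mode' is a global minor mode provided by the `eat' package.
(require 'eat)
(eat-eshell-mode 1)
```

After this, process output in Eshell is filtered through Eat's terminal emulator, so TUI programs render in the Eshell buffer itself.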
It runs full-fledged TUIs like vim and ncmpcpp slowly in Eshell, but it's good enough for quick fzf uses. It's perfectly fine for "small" dynamic elements like the spinners and progress bars used by package managers.
Just remember to use system pipes (with "*|") instead of Elisp pipes (with "|") if you're piping data into an interactive TUI application like fzf in Eshell.
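To illustrate the difference (hypothetical commands, just to show the syntax): `|` runs the pipeline through Eshell's Lisp-level plumbing, while `*|` hands the whole pipeline to real system pipes, which interactive TUIs need.

```
;; Elisp pipe: Eshell buffers the output itself -- fine for filters.
~ $ ls | grep txt

;; System pipe: a real OS pipe, required for interactive TUIs like fzf.
~ $ find . -name '*.el' *| fzf
```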
How does eat detect a visual command in eshell? I use vterm in Emacs for visual commands like nvim and htop. But it's triggered manually with a simple custom prefix command (just 'v') added to the actual command. I wonder if that trigger could be automated. It sounds from your description like vterm is faster than eat. If so, a similar automatic trigger for vterm could be very beneficial.
eat-eshell-mode doesn't detect visual commands and launch a separate Eat buffer, the way `eshell-visual-commands' does. It filters all process output in Eshell and handles terminal escape codes. It turns the Eshell buffer itself into a terminal, so that vim or whatever runs in Eshell.
> It sounds from your description like vterm is faster than eat.
vterm is faster than eat, but a dedicated eat buffer is fast enough for most common TUIs. An eshell buffer with eat-eshell-mode is slower.
Visual commands only differ from normal commands in the escape codes they use (like enabling the alternate screen buffer, clearing the screen, ...). Eshell can't deal with those (and shouldn't, as it's a shell, not a terminal). Eat adds a layer that processes those escape codes, and that's all you need to handle visual commands.
I've retrained muscle memory to use C-c C-l (which I rebind to `consult-history'). This gives me a fuzzy-searchable list of all my history. I find that I prefer this to a normal shell's C-r, because with my vertical completion setup I can see multiple matches for my search simultaneously.
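The rebinding is a couple of lines. A sketch, assuming the `consult' package is installed (the keymap hook is mine, not from the comment above):

```elisp
;; Replace Eshell's default history listing on C-c C-l with
;; consult-history, which offers a fuzzy-searchable vertical list.
(with-eval-after-load 'eshell
  (define-key eshell-mode-map (kbd "C-c C-l") #'consult-history))
```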
Because the POTUS is the chief executive. His literal job is to manage the executive branch of the government. Unless his policies go against the law, there isn't anyone who can legally dispute his policies for the executive. And if he does something illegal, the House can impeach him and the Senate can remove him.
This is exactly what I'm hoping for. When these tools get to a certain level I'd love to see a TV adaptation of Harry Potter and the Methods of Rationality.