Hacker News | bekantan's comments

Still on XMonad, what are some good alternatives?


If you want to stay in the land of monads there is https://github.com/SimulaVR/Simula?tab=readme-ov-file "a VR window manager for Linux". Should've been called MetaMonad ;) but I guess that name was already taken by the phylum Metamonada, and I don't want to get on their bad side.


Any significant differences in charging experience? Can you use Tesla Supercharging network?


The Ioniq 6 has the 800V architecture: 20-80% in under 20 minutes.

Hyundai has an EV platform that is shared across a number of models. Sandy Munro has a video on it and on why he thinks it is a great idea.


You can use the Tesla network; in some markets they are even giving out free NACS adapters for older cars that don't already have NACS ports.


AFAIK newer ones come with a NACS port; for older ones you have to buy the adapter.


2025 Ioniq 5s have the NACS port (only in NA markets, obviously). I don't think the 2025 Ioniq 6s do, though they probably will for the next model year. Like you said, CCS-equipped cars can use the Tesla network with a NACS-to-CCS adapter.


The article is about Europe, which has standardised on CCS2 charging, Tesla included since 2018. None come with NACS.

So European Teslas and Hyundais have used the same plug since then. AFAIK, many UK and EU Tesla superchargers are open to other cars. ( https://www.carwow.co.uk/editorial/going-electric/ev-chargin... )


They say that the NACS adapter will be free for existing customers, but haven't made any details available yet beyond "2025".



I really liked this talk as well. One part that I'm not sure I can fully agree with, though, is the idea of fully re-conceptualizing the self. It is possible to self-author and partially change... but I have never heard of nor met anyone who just became a totally different person. I'm willing to concede that he may have been speaking hypothetically, or that maybe the idea of changing the self will be more accessible to AGI than to humans.


That was a mind-expanding 45-minute talk. Thank you for highlighting it.


Seems crazy. I identify with my agency, i.e., self = agency + a few memories.

This is saying the opposite?

I would guess it comes from not knowing what you want. If you don't go through the Uberman process, then you end up saying things like that.


This talk was one of the best so far, highly recommended.


They are among the highlights for me every year. Just the right amount of brain-melting information density.


It would indeed be better to create appropriately sized storage.

However, I don't think the underlying array is resized every time `add` is called. I'd expect a resize to happen fewer than 30 times for 1M adds (capacity grows geometrically with a=10 and r=1.5).
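A quick back-of-envelope check (a sketch; assumes the initial capacity a=10 and growth factor r=1.5 mentioned above, with capacities rounded up to whole elements):

```python
import math

def resize_count(n_adds, a=10, r=1.5):
    """Count reallocations a geometrically-growing array performs
    before it can hold n_adds elements."""
    capacity, resizes = a, 0
    while capacity < n_adds:
        capacity = math.ceil(capacity * r)
        resizes += 1
    return resizes

print(resize_count(1_000_000))  # 29 resizes for 1M adds
```

So fewer than 30 resizes for a million adds, consistent with the estimate.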


I was in SF back in May, but didn't manage to get through the waitlist :(

It was so cool to see them diving around.

Interestingly, when I showed the clips to some of my senior family members, they didn't seem interested at all. I think they couldn't comprehend what was going on, even after I explained.

Their reaction (across several independent trials) was similar to when I show them an AI-generated image of something that clearly can't exist. It was so absurd that it was just filtered out with a comment: "yeah, yeah - nice car".


Every quantum leap in technology looks that way: it's so unbelievable/magical that people can't even comprehend it.


A “productivity hack” for folks who can’t afford this and already own iPad+Pencil which they primarily use indoors: switch to grayscale mode, it is awesome :)


You can try it yourself on https://chat.lmsys.org (sus-column-r model)


> The output quality is not "ruined" at all.

That was my experience as well - 3-bit version is pretty good.

I also tried 2-bit version, which was disappointing.

However, there is a new 2-bit approach in the works[1] (merged yesterday) which performs surprisingly well for Mixtral 8x7B Instruct with 2.10 bits per weight (12.3 GB model size).

[1] https://github.com/ggerganov/llama.cpp/pull/4773
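The quoted size roughly checks out (a sketch, assuming Mixtral 8x7B's ~46.7B total parameters; the figure is not stated in the thread):

```python
def model_size_gb(n_params, bits_per_weight):
    """Approximate on-disk size in decimal GB of a quantized model:
    params * bits, converted to bytes, then to GB."""
    return n_params * bits_per_weight / 8 / 1e9

print(round(model_size_gb(46.7e9, 2.10), 1))  # ~12.3 GB
```

Real GGUF files also carry a small amount of metadata and some higher-precision tensors, so the actual file size can differ slightly from this estimate.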


I could only run the 2-bit q2 model on my 32GB M2 Pro. I was a little disappointed, but I look forward to trying the new approach you linked. I just use Mistral's own service and a 3rd-party hosting service for now.

After trying the various options for running locally, I have settled on just using Ollama - really convenient and easy, and the serve APIs let me use various LLMs in several different (mostly Lisp) programming languages.

With excellent resources from Hugging Face, tool providers, etc., I hope that the user facing interface for running LLMs is simplified even further: enter your hardware specs and get available models filtered by what runs on a user’s setup. Really, we are close to being there.

Off topic: I hope I don’t sound too lazy, but I am retired (in the last 12 years before retirement I managed a deep learning team at Capital One, worked for a while at Google and three other AI companies) and I only allocate about 2 hours a day to experiment with LLMs so I like to be efficient with my time.


Ollama[1] + Ollama WebUI[2] is a killer combination for offline/fully local LLMs. Takes all the pain out of getting LLMs going. Both projects are rapidly adding functionality including recent addition of multimodal support.

[1] https://github.com/jmorganca/ollama

[2] https://github.com/ollama-webui/ollama-webui


You should be able to run Q3 and maybe even Q4 quants with 32GB, even on the GPU, since you can raise the max RAM allocation with: 'sudo sysctl iogpu.wired_limit_mb=12345'


That is a very interesting discussion. Weird to me that the quantization code wasn’t required to be in the same PR. Ika is also already talking about a slightly higher 2.31bpw quantization, apparently.


Worth reading: https://en.wikipedia.org/wiki/Jasenovac_concentration_camp

> Operated by the governing Ustaše regime, Europe's only Nazi collaborationist regime that operated its own extermination camps

> It quickly grew into the third largest concentration camp in Europe

> Unlike German Nazi-run camps, Jasenovac lacked the infrastructure for mass murder on an industrial scale, such as gas chambers. Instead, it "specialized in one-on-one violence of a particularly brutal kind", and prisoners were primarily murdered with the use of knives, hammers, and axes, or shot

> Ustaše regime having murdered somewhere near 100,000 people in Jasenovac between 1941 and 1945

--

Flower Monument on Spomenik Database: https://www.spomenikdatabase.org/jasenovec


Also worth checking out for more general use of LLMs in emacs: https://github.com/karthink/gptel



I didn't try the other ones, but the one I mentioned is the most frictionless way I've come across so far to use several different LLMs. I had very low expectations, but this package has good sauce.

