If you want to stay in the land of monads there is https://github.com/SimulaVR/Simula?tab=readme-ov-file "a VR window manager for Linux". Should've been called MetaMonad ;) but I guess that name was already taken by the phylum Metamonada, and I don't want to get on their bad side.
2025 Ioniq 5s have the NACS port (only in NA markets, obviously). I don't think the 2025 Ioniq 6s do, though they probably will for the next model year. Like you said, CCS-equipped cars can use the Tesla network with a NACS-to-CCS adapter.
I really liked this talk as well. One part that I'm not sure I can fully agree with, though, is the idea of fully re-conceptualizing the self. It is possible to self-author and partially change... but I have never heard of nor met anyone who just became a totally different person. I'm willing to concede that he may have been speaking hypothetically, or that maybe the idea of changing the self will be more accessible to an AGI than to humans.
It would indeed be better to create appropriately sized storage.
However, I don't think the underlying array is resized every time `add` is called. I'd expect the resize to happen fewer than 30 times for 1M adds (capacity grows geometrically with a=10 and r=1.5).
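A quick back-of-the-envelope sketch confirms the count, assuming the growth policy above (initial capacity 10, ~1.5x growth, Java `ArrayList`-style `old + (old >> 1)`):

```python
# Count geometric resizes: capacity starts at 10 and grows by ~1.5x
# each time it is exhausted (Java ArrayList-style integer growth).
capacity, resizes = 10, 0
while capacity < 1_000_000:
    capacity = capacity + (capacity >> 1)  # old + old/2, truncated
    resizes += 1
print(resizes)  # 29 -- fewer than 30 resizes for 1M adds
```

So the amortized cost per `add` stays O(1) even though individual resizes copy the whole array.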
I was in SF back in May, but didn't manage to get through the waitlist :(
It was so cool to see them diving around.
Interestingly, when I showed the clips to some of my senior family members, they didn't seem interested at all. I think they couldn't comprehend what was going on, even after I explained.
Their reaction (across several independent trials) was similar to showing them an AI-generated image of something that clearly can't exist. It was so absurd that it was simply filtered out with a comment like "yeah, yeah - nice car".
A “productivity hack” for folks who can’t afford this and already own iPad+Pencil which they primarily use indoors: switch to grayscale mode, it is awesome :)
That was my experience as well - the 3-bit version is pretty good.
I also tried the 2-bit version, which was disappointing.
However, there is a new 2-bit approach in the works[1] (merged yesterday) which performs surprisingly well for Mixtral 8x7B Instruct with 2.10 bits per weight (12.3 GB model size).
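As a sanity check on that model size (taking 2.10 as the average bits per weight and assuming the commonly cited ~46.7B total parameter count for Mixtral 8x7B, which is not stated in the linked PR):

```python
# Rough model-size estimate from bits-per-weight (bpw).
# 46.7e9 total parameters for Mixtral 8x7B is an assumption here.
params = 46.7e9
bpw = 2.10
size_gb = params * bpw / 8 / 1e9  # bits -> bytes -> GB
print(f"{size_gb:.1f} GB")  # ~12.3 GB, matching the quoted figure
```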
I could only run the 2-bit q2 quant on my 32 GB M2 Pro. I was a little disappointed, but I look forward to trying the new approach you linked. For now I just use Mistral's own service and a third-party hosting service.
After trying the various options for running locally, I have settled on just using Ollama - really convenient and easy, and the serve APIs let me use various LLMs in several different (mostly Lisp) programming languages.
With excellent resources from Hugging Face, tool providers, etc., I hope the user-facing interface for running LLMs is simplified even further: enter your hardware specs and get the available models filtered down to what runs on your setup. Really, we are close to being there.
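Such a filter could be as simple as matching each model's estimated memory footprint against the user's RAM. A hypothetical sketch (the model names, sizes, and 75% headroom factor are all illustrative, not from any real catalog):

```python
# Hypothetical model picker: keep models whose estimated memory
# footprint fits in ~75% of available RAM (headroom for OS/context).
CATALOG = {
    "mistral-7b-q4": 4.1,     # approx. GB needed -- illustrative numbers
    "mixtral-8x7b-q2": 12.3,
    "mixtral-8x7b-q3": 18.0,
}

def runnable_models(available_gb):
    budget = available_gb * 0.75
    return sorted(name for name, gb in CATALOG.items() if gb <= budget)

print(runnable_models(16))  # ['mistral-7b-q4']
print(runnable_models(32))  # all three fit within the 24 GB budget
```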
Off topic: I hope I don’t sound too lazy, but I am retired (in the last 12 years before retirement I managed a deep learning team at Capital One, worked for a while at Google and three other AI companies) and I only allocate about 2 hours a day to experiment with LLMs so I like to be efficient with my time.
Ollama[1] + Ollama WebUI[2] is a killer combination for offline/fully local LLMs. Takes all the pain out of getting LLMs going. Both projects are rapidly adding functionality including recent addition of multimodal support.
You should be able to run Q3 and maybe even Q4 quants with 32GB. Even on the GPU, since you can raise the max RAM allocation with:
`sudo sysctl iogpu.wired_limit_mb=12345`
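The value is in MB, so converting a desired GPU budget is just a multiply. For example, reserving ~28 GB of a 32 GB machine (the 28 GB figure is illustrative, not a recommendation):

```python
# Convert a desired GPU wired-memory budget in GB to the MB value
# the iogpu.wired_limit_mb sysctl expects.
desired_gb = 28  # illustrative -- leave a few GB for the OS
limit_mb = desired_gb * 1024
print(f"sudo sysctl iogpu.wired_limit_mb={limit_mb}")
```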
That is a very interesting discussion. Weird to me that the quantization code wasn’t required to be in the same PR. Ika is also already talking about a slightly higher 2.31 bpw quantization, apparently.
> Operated by the governing Ustaše regime, Europe's only Nazi collaborationist regime that operated its own extermination camps
> It quickly grew into the third largest concentration camp in Europe
> Unlike German Nazi-run camps, Jasenovac lacked the infrastructure for mass murder on an industrial scale, such as gas chambers. Instead, it "specialized in one-on-one violence of a particularly brutal kind", and prisoners were primarily murdered with the use of knives, hammers, and axes, or shot
> Ustaše regime having murdered somewhere near 100,000 people in Jasenovac between 1941 and 1945
I didn't try the other ones, but the one I mentioned is the most frictionless way I've come across so far to use several different LLMs. I had very low expectations, but this package has good sauce