Hacker News | a-dub's comments

does preempt_rt actually confer a ui responsiveness benefit on multicore systems?

i wish the ui on those things was more visually appealing. between the cheap looking gloss finish on the display itself and the unextraordinary ui, it's just kinda blah. one can have a debate about to screen or not to screen or whether to use vfd displays or whatever and i get the importance of cost control but it should look good and it really doesn't. the graphic of the car looks like a cartoon.

Interesting take; I feel the total opposite: I love the UI.

i think a lot of people do. i don't know what it is, there's maybe just something about the car graphic that doesn't sit right with me. the front/side view when parked just seems cheesy for some reason. maybe because it's meant to show unclosed doors or something, and when everything is set, the car's status display is just "car", which is redundant.

It does show open doors etc. but if not that then what would you show on the screen? You can already shrink it so the rightmost 3/4 of the screen is the map, leaving just 1/4 of the screen for the car visualization and indicators.

maybe it's the quasi-photorealistic nature of the car image that bothers me. it's not a photo, it's not a schematic, it's not a diagram. it's too artificial to look like a photo, yet too realistic to look like a schematic. or maybe the physically implausible lighting.

Animations could probably be faster and touch areas for opening trunk/frunk could be larger.

But then, I'm driving a brand new Fiat RV with CarPlay this week. Cruise control by itself has 2 or 3 bugs, and that's not even trying to be picky. Or how's this: I can't use my phone as a hotspot while it's connected to CarPlay. You can't pinch-zoom or pan maps in 2026, and there are a myriad of other things that make me cringe when people moan that they won't buy a Tesla because of the lack of CarPlay.


It's glass...

json in git for reference data actually isn't terrible. having it with the code isn't great, and the repo is massively bloated in other ways, but for change tracking of a source of truth it's not bad, except that maybe it should be canonicalized.

It's not a terrible storage mechanism, but 36,625 workflow runs taking between ~1-12 minutes each seems like a terrible use of runner resources. Even at many orgs, actions constantly running for very little benefit have been a challenge. Whether it's wasted dev time or wasted cpu, to say nothing of the horrible security environment that globally triggered arbitrary pr actions introduce, there's something wrong with Actions as a product.
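Back of the envelope, those run counts add up fast (a rough sketch in Python, using only the numbers above):

```python
runs = 36_625            # workflow runs mentioned above
low, high = 1, 12        # approximate minutes per run

minutes_low, minutes_high = runs * low, runs * high
days_low, days_high = minutes_low / 1440, minutes_high / 1440

# somewhere between roughly a month and most of a year of runner time
print(f"~{days_low:.0f} to ~{days_high:.0f} days of runner time")
```

Even the lower bound is weeks of compute for what is essentially change tracking of a JSON file.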

doesn't the side by side view in github diff solve this?

conflict free merging sounds cool, but doesn't that just mean that a human review step is replaced by "changes become intervals rather than collections of lines" and "last set of intervals always wins"? seems like it makes sense when the conflicts are resolved instantaneously during live editing, but does it still make sense with one shot code merges over long intervals of time? today's systems are "get the patch right" and then "get the merge right"... can automatic intervalization be trusted?

edit: actually really interesting if you think about it. crdts have been proven with character-at-a-time edits and use of the mouse select tool... these are inherently intervalized (select) or easy (character at a time). how does it work for larger patches that can have loads of small edits?


how about if i do nothing the internet assumes i'm a child and therefore does not track me, show me ads or permit doom scroll feeds. then if i want i can jump through some hoops and pay some money or something to get a digital id that lets me attach a zkp to all my http requests that then unlock the magic of ads, tracking and doom scroll feeds.

seems like a good plan to me.


That would be a solution if the people pushing this actually cared about "protecting kids."

But let's be honest, governments want a dragnet they can use to monitor/control all internet communication. The people running western democracies are just as power hungry and zealously authoritarian (my ideas will bring utopia!) as the people running the CCP.

The only difference is, the CCP has permissionless authority, so they ended internet freedom in China decades ago. They didn't have to ask.

Western authoritarians on the other hand, have to fight a slow battle to cleverly grind you down over time, so that you get tricked into allowing them to gatekeep the internet. It hasn't worked so far. The next step (this one) is "okay, so you don't want to have to ask us permission before you visit a website...but won't anybody think of the poor beautiful innocent children???"

Emotions activated. Rational thought deactivated.

They'll get what they want because they always get what they want. And you'll be convinced it's good for you over time, because most people just follow whatever the mainstream "vibes" are, and the elite sets the vibes. It's amazing a free internet existed this long. Great while it lasted.


i'm only half joking. adding zkps to http requests is probably the correct privacy preserving technical solution that could be built into something sensible.

the bigger issue is that lawmakers are thinking in terms of smartphones, tablets and commercial pcs as shrink wrapped media consumption devices with a setup step... not protocol level support that preserves parts of computing and the internet they don't even really know exists. seems like the ietf should have lobbyists or something.


ZKPs don't buy anything, since an online service can sell them by the thousand and you're just trusting the client that it belongs to the actual user. You might as well just do "User-Age-Category: 18plus" then and save a headache.

> then if i want i can jump through some hoops and pay some money or something to get a digital id that lets me attach a zkp

Yeah, so some guy is selling his zkps by the millions for a dollar each. Since they're zkps you can't find out who it was, and the system is pointless.

no. you can pay verisign or google or the government of estonia or whatever for a digital id and they can issue you a zkp that is signed by them that attests whatever without giving up your identity.
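to make that concrete, here's a toy sketch of the chaum-style blind signature idea, which is a close cousin of the zkp approach (and roughly how privacy pass works): the issuer attests to a token it never actually sees, so later presentations can't be linked back to you. the rsa numbers here are throwaway demo values, not real parameters.

```python
# toy rsa blind signature: issuer attests without seeing what it signs
p, q = 61, 53
n = p * q                            # issuer's public modulus (3233)
e = 17                               # issuer's public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # issuer's private exponent

m = 42        # user's token (in practice, a hash of a random nonce)
r = 7         # blinding factor; random and coprime to n in practice

blinded = (m * pow(r, e, n)) % n         # user -> issuer; m stays hidden
blind_sig = pow(blinded, d, n)           # issuer signs blindly
sig = (blind_sig * pow(r, -1, n)) % n    # user strips the blinding

# anyone holding the issuer's public key can verify the attestation,
# but the issuer can't link this signature back to the signing request
assert pow(sig, e, n) == m
```

a real deployment would use proper key sizes and a hash-to-message step, but the unlinkability property is the same.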

i'd be curious about a head to head comparison of how much the c2 actually buys over a static aot compilation with something serious like llvm.

if it is valuable, i'd be surprised if you can't freeze/resume the state and use it for instantaneous workload-optimized startup.


> can't freeze/resume the state

I mean, both of your points are a thing, see https://www.azul.com/products/components/falcon-jit-compiler... for LLVM as a JIT compiler

and https://openjdk.org/jeps/483 (and in general, project Leyden)


yeah, i remember learning this as a kid and being surprised. i originally thought laserdiscs were modern high tech, but then they turned out to actually be from the late 70s/early 80s with primitive analog video encoding, whereas red book audio cds of the mid to late 80s were actually digital.


BUT... Pioneer put AC-3 (Dolby Digital) surround on LaserDiscs before DVDs came out. So LaserDiscs were the first video medium to offer digital sound at home.

And at that point, most players sold were combo players that could also play CDs.

And there was one more disc format: CD Video. It was a CD-sized digital single that also had a LaserDisc section for the (analog) music video. I have a couple; one is Bon Jovi.


Was CD video compressed? I thought it existed at the same time as DVD but cheaper.


That's Video CD. It existed before DVD but survived alongside it (mainly in Asia) as a cheaper alternative.


no, apparently there was both. i was familiar with video cd which was mpeg-1 on a cd-rom (with some weird partitioning scheme). cd video is apparently a very obscure hybrid format with an analog video section and a digital audio section. https://en.wikipedia.org/wiki/CD_Video


No; it was analog LaserDisc video. I think "Video CD" was a flavor of CD-I, which was very popular in China and was used way, way beyond the introduction of DVD. Well into the 2000s, I think.


I just learned this in my 40s and am surprised. Very cool.


mmm. interesting and fun concept, but it seems to me like the text is actually the right layer for storing and expressing changes since that is what gets read, changed and reasoned about. why does it make more sense to use asts here?

are these asts fully normalized or do (x) and ((x)) produce different trees, yet still express the same thing?

why change what is being stored and tracked when the language aware metadata for each change can be generated after the fact (or alongside the changes)? (adding transform layers between what appears and what gets stored/tracked seems like it could get confusing?)
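fwiw, python's ast module is one data point on the normalization question: redundant parens never reach the tree, so (x) and ((x)) really are identical there, while parens that change meaning still produce different trees.

```python
import ast

# redundant grouping parens are normalized away at parse time
assert ast.dump(ast.parse("(x)")) == ast.dump(ast.parse("((x))"))

# but parens that change semantics (tuple vs. bare name) do differ
assert ast.dump(ast.parse("(x,)")) != ast.dump(ast.parse("(x)"))
```

so an ast-backed store would silently erase that kind of formatting distinction, for better or worse.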


> why does it make more sense to use asts here

For one, it eliminates a class of merge conflict that arises strictly from text formatting.

I always liked the idea of storing code in abstraction, especially if editors supported edit-time formatting. I enjoy working on other people's code, but I don't think anybody likes the tedium of complying with style guides, especially ones that are enforced at the SCM level, which adds friction to creating even local, temporary revisions. This kind of thing would obviate that. That's why I also appreciate strict and deterministic systems like rustfmt. Unison goes a little further, which is neat, but I think they're struggling to get adoption because of that, even though I'm pretty sure they've got some better tooling for working outside the whole ecosystem. These decoupled tools are probably a good way to go.

I was messing around with a file-less paradigm that would present a source tree in arbitrary ways, like just showing individual functions, so you have the things you're working on co-located rather than switching between files. Kind of like the old VB IDE.
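A rough sketch of that idea, using Python's ast module to index individual functions by name so they could be presented independently of their files (the source string here is made up):

```python
import ast

src = '''\
def area(r):
    return 3.14159 * r * r

def perimeter(r):
    return 2 * 3.14159 * r
'''

tree = ast.parse(src)
# index every top-level function by name, with its exact source text
funcs = {
    node.name: ast.get_source_segment(src, node)
    for node in tree.body
    if isinstance(node, ast.FunctionDef)
}

# a "view" can now show just the function you're working on
print(funcs["perimeter"])
```

Doing this across a whole tree (and writing edits back to the right files) is the hard part, but the read side is cheap.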


An AST-based conflict resolver could eliminate the same kind of merge conflicts on a text-based RCS


Yeah, I suppose that's true, too. You've got to do the conversion at some point. I don't know that you get any benefit from storing the text, doing the transformation to support whatever ops (deconflicting, etc.), and then transforming back to text, vs just storing it in the intermediate format. Ideally, this would all be transparent to the user anyway.


For one merge, yes. The fun starts when you have a sequence of merges. CRDTs put ids on tokens, so things are a bit more deterministic. Imagine a variable rename or a whitespace change; it messes up text diffing completely.


I remember someone mentioning a system that operated with ASTs like this in the 70s or 80s. One of the affordances is that the source base did not require a linter. Everyone reading the code can have it formatted the way they like, and it would all still work with other people’s code.


It seems like that could be done in the editor if you auto-reformat on load and save. (Assuming there's an agreed-on canonical format.)


The point is that in an AST based storage, the AST itself is the source of truth. Therefore, there is no canonical format.


Related, I’d love an editor that’d let me view/edit identifier names in snake_case and save them as camelCase on disk. If anyone knows of such a thing - please let me know!
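Not aware of an editor that does this out of the box, but the core transform is small; a sketch of the two directions (it round-trips cleanly for simple names, though acronyms like parseURL would need more care):

```python
import re

def to_snake(name):
    # camelCase -> snake_case, for the in-editor view
    return re.sub(r"(?<=[a-z0-9])([A-Z])",
                  lambda m: "_" + m.group(1).lower(), name)

def to_camel(name):
    # snake_case -> camelCase, before writing back to disk
    head, *rest = name.split("_")
    return head + "".join(w.capitalize() for w in rest)

assert to_snake("maxRetryCount") == "max_retry_count"
assert to_camel("max_retry_count") == "maxRetryCount"
assert to_camel(to_snake("maxRetryCount")) == "maxRetryCount"
```

The fiddly part is applying it only to identifiers and not to strings or comments, which is where you'd want a real tokenizer rather than a regex.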


This is actually possible with glasses-mode in Emacs: https://codelearn.me/2025/02/24/emacs-glasses-mode.html

I think it sees very little usage though.


Sure. Presumably you could have localized source presentation, too.

But, yeah, I think a personalized development environment with all of your preferences preserved and that don't interfere with whatever the upstream standard is would be a nice upgrade.


100% agree. I think AST-driven tooling is very valuable (most big companies have internal tools akin to each operation Beagle provides, and Linux has Coccinelle / Spatch, for example), but it's still more easily implemented as a layer on top of source code than as the fundamental source of truth.

There are some clever things that can be done with merge/split using CRDTs as the stored transformation, but they're hard to reason about compared to just semantic merge tools, and don't outweigh the cognitive overhead IMO.

Having worked for many years with programming systems that were natively expressed as trees (often just operation trees and object graphs, discarding the notion of syntax completely), I can say this layer is incredibly difficult for humans to reason about, especially when it comes to diffs, and usually at the end you end up having to build a system which can produce and act upon text-based diffs anyway.

I think there's some notion of these kinds of revision management tools being useful for an LLM, but again, at that point you might as well run them aside (just perform the source -> AST transformation at each commit) rather than use them as the core storage.


> but it's still just easier implemented as a layer on top of source code than the fundamental source of truth

Easier but much less valuable.


you can parse the text at any time pretty much for free and use anything you learn to be smarter about manipulating the text. you can literally replace the default diff program with one that parses the source files to do a better job today.
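for example, difftastic ("difft") is one parser-backed diff tool, and git lets you swap it in as the external diff program without changing anything about how the repo is stored:

```shell
# one-off, for a single invocation (assumes difftastic is installed):
GIT_EXTERNAL_DIFF=difft git diff

# or set it as your default external diff program:
git config --global diff.external difft
```

the repo stays plain text; only your view of changes gets syntax-aware.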


This is the fundamental idea behind git - to fully compute/derive diffs from snapshots (commits) and to only store snapshots. While brilliant in some ways - particularly the simplifications it allows in terms of implementation, I’ve always felt that dropping all information about how a new commit was derived from its parent(s) was wasteful. There have been a number of occasions where I wished that git recorded a rename/mv somehow - it’s particularly annoying when you squash some commits and suddenly it no longer recognizes that a file was renamed where previously it was able to determine this. Now your history is broken - “git blame” fails to provide useful information, etc. There are other ways of storing history and revisions which don’t have this issue - git isn’t the end of the line in terms of version control evolution.
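To be fair, git can often recover rename history at read time with --follow, even though nothing was recorded at commit time; a quick demo in a throwaway repo:

```shell
# sketch: after a rename, --follow recovers the file's earlier history
d=$(mktemp -d) && cd "$d" && git init -q .
git config user.email you@example.com && git config user.name you
echo hello > old.txt && git add old.txt && git commit -qm "add old.txt"
git mv old.txt new.txt && git commit -qm "rename to new.txt"

git log --oneline -- new.txt            # shows only the rename commit
git log --follow --oneline -- new.txt   # shows both commits
```

But as the parent says, this is heuristic reconstruction, not recorded intent, which is exactly why it breaks down after squashes.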


I agree with this, I just don't think I agree with the Beagle approach (CRDT on AST as the source of truth) vs. the Git method (bytewise files as the source of truth) with something alongside.

Like, I think it's way easier to add a parallel construction to Git (via a formal method or even a magic file) which includes the CRDT for the AST than it is to make that the base unit of truth. It still lets you answer and interact at the higher level with "oh, this commit changed $SYM1 to $SYM2" without also destroying byte-level file information that someone finds important, and without changing the main abstraction from the human-space to the computer-space.


CRDT's trick is metadata. Good old diff guesses the changes by solving the longest-common-subsequence problem. There is always some degree of confusion as changes accumulate. CRDTs can know the exact changes, or at least guess less.
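A toy illustration of the ids-vs-offsets point, with hypothetical token dicts standing in for real CRDT nodes: because each op addresses a stable id rather than a line/column position, concurrent edits commute.

```python
import copy
import uuid

def new_token(text):
    # every token carries a stable id; ops address ids, not positions
    return {"id": uuid.uuid4().hex, "text": text}

doc = [new_token(w) for w in "let total = a + b".split()]

# replica 1: rename the variable, by token id
rename = ("set", doc[1]["id"], "sum")
# replica 2: append ";" anchored to the id of "b"
append = ("insert_after", doc[-1]["id"], new_token(";"))

def apply_op(doc, op):
    if op[0] == "set":
        for tok in doc:
            if tok["id"] == op[1]:
                tok["text"] = op[2]
    else:  # insert_after
        i = next(i for i, tok in enumerate(doc) if tok["id"] == op[1])
        doc.insert(i + 1, op[2])
    return doc

# both application orders converge; neither op depends on offsets
a = apply_op(apply_op(copy.deepcopy(doc), rename), append)
b = apply_op(apply_op(copy.deepcopy(doc), append), rename)
assert [t["text"] for t in a] == [t["text"] for t in b] \
    == ["let", "sum", "=", "a", "+", "b", ";"]
```

Real sequence CRDTs also have to order concurrent inserts at the same anchor deterministically, which is where most of the actual complexity lives.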


One nice thing about serializing/transmitting AST changes is that it makes it much easier to compose and transform change sets.

The text based diff method works fine if everyone is working off a head, but when you're trying to compose a release from a lot of branches it's usually a huge mess. Text based diffs also make maintaining forks harder.

Git is going to become a big bottleneck as agents get better.


what do you actually gain over enforced formatting?

first, you should not be composing releases at the end from conflicting branches; you should be integrating branches and testing each one in sequence and then cutting releases. if there are changes to the base for a given branch, that means that branch has to be updated and re-tested. simple as that. storing changes as normalized trees rather than normalized text doesn't really buy you anything except for maybe slightly smarter automatic merge conflict resolution, and even then it needs to be analyzed and tested.


Diffs are fragile, and while I agree with that process in a world where humans do all the work and you aren't cutting a dozen different releases, I think that's a world we're rapidly moving away from.


in that case you probably flag a bunch of prs for release and it linearizes their order and rebases and tests each one a step ahead of your review (responding to any changes you make as you go).


Having a VCS that stores changes as refactorings combined with an editor that reports the refactorings directly to the VCS, without plain text files as intermediate format, would avoid losing information on the way.

The downside is tight coupling between VCS and editor. It will be difficult to convince developers to use anything else than their favourite editor when they want to use your VCS.

I wonder if you can solve it the language-server way, so that each editor that supports refactoring through language-server would support the VCS.


they've been the framework laptop since before there was a framework laptop.

worldwide onsite service response times and parts availability are top notch as well.


it can actually look across conversations, so i would make sure to tell it not to. (one fun thing to do is to ask it to look across the past year and generate a claude wrapped where it roasts you.)

i also probably wouldn't use it for anything i don't know how to verify myself.

