Hacker News | bjconlan's comments

I'm curious as to how Palantir has been used during the war on Iran (if at all, or whether it suffers from subjective bias). I know there were larger movements at play on a political level here, but I'm becoming concerned about how much influence one "thought group" (in private corps) is having on the world's largest war machine. It might be dulling critical thinking.

The term "Merchants of Death" comes to mind. Easy kills, done dirt cheap, appeals to state level despots.

You must be trolling. Did you read the article?

Sorry, I composed it from my phone and wasn't clear. I realize the article talks about the Maven tooling; I was talking about its data aggregation and modeling suite more generally. (And that obviously would have been used as part of the Iranian engagement. The "if at all" statement was meant to be tongue in cheek given the current mess being reported.)

Token usage and agent usage optimisation?

It seems like a real problem to me. Probably because I'm not overly inspired to pay for a Claude x5 subscription and really hate the session restrictions on a standard Pro plan (especially when the weekly allowance left at the end of the week can't be used because of those restrictions). Most of my tasks basically use superpowers, and I find I get about 30-90 minutes of usage per session before I run out of tokens. Sessions reset about every 4 hours, by which time I generally don't get back to the work until the next day; my weekly usage sits at about 50%, so there's lots of wastage due to bad scheduling. A tool like this could add better AFK-style agent interoperability through batching etc. in a one-tool-fits-all scenario.

If this gets its foot in the door/market share, there is plenty of runway here for adding more optimized agent utilization and adding value for users.


Agreed on the need, and this space needs more exploration that is not going to come from big cos, as they are incentivised to boost spend. I've been exploring the same problem statement, but with a different approach: https://github.com/hsaliak/std_slop/blob/main/docs/CONTEXT_M....

The comment was more about how to make their approach sticky. I feel that local SLMs can replicate what this product does.


I used to work for a human that did this (he sits mostly on the classical-therapeutics side). He actually started a business reviewing and auditing the submission processes that outline approvals, and he had been around the game long enough to know where the next submission would put a company in the approvals process for a number of agencies.

https://maestrodatabase.com/

Looks like he's still on top of everything given the most recent blog post is from 6/2/2026.

I believe the insights here could be useful given he has a sense of when the penultimate submission has occurred (but I'm not entirely sure what that is on a % basis, nor whether the company's stock reacts to it).


Yes we know of a few. Honestly, it was pretty hard to even find a good catalyst calendar for this space.

I'll give it a read to learn more. Thanks for the note!


Yeah, but it's specifically testing things that implement against a POSIX API, because generally that's what "native" APIs do (omitting libc and other OS-specific foundation libraries that are pulled in at runtime or otherwise). I suspect that if the applications linked against some WASI-like runtime it might be a better metric (native WASI as a lib vs. a wasm runtime that also links). Mind you, that still wouldn't help the browser runtime... but it would be a better metric for a wasm-to-native performance comparison.

But as already mentioned, we have gone through all this before. Maybe we'll see wasm bytecodes pushed through silicon like we did with the JVM... Although perhaps this time it might stick, or move up into server hardware (which might have happened, but I only recall embedded devices supporting hardware-level JVM bytecodes).

In short the web browser bit is omitted from the title.


Or Toit, which unsurprisingly has Lars Bak involved: a man with a history touching the Self, JVM and V8 codebases.

I wouldn't be surprised if Toit's principals, Kasper or Florian, also have experience at these technological intersections.


Perhaps if a supply chain attack is your largest concern then using some well-vetted system like Wolfi is more up your alley. (See some of the related repos on GitHub: https://github.com/projectbluefin - I've been following its development, and it's currently still under development.)

Again, "vetting" is a source of contention here, as I'm not sure how the quality of official RPM sources compares to those outlined in an SBOM.


Yeah, but there is something else here too... I used Cachy for a heartbeat and it advertises the same benefits; it just felt slower (notably on boot). Maybe it was just all the graphical load screens.

There's something Clear had that made it feel modern, familiar and boring (which might not be for everyone). 90% of my tasks were in VS Code devcontainers, so I kept things simple and out of the system for the most part.


Sounds like bloat removal and minimalism.


Fingers crossed. I probably just did my last fresh install of this a couple of days ago and my last swupd update now. You will be missed...


I do love the warnings here... The older I get, the more critical I am of most internet results, except those I can tie back to a common, experienced/witnessed axiom (which unfortunately AI does really well, at least enough to earn my trust on that point). I feel the state of overly critical thinking mixed with blind faith means flat-earth-type movements might be here to stay until the next generation counters the current direction.

But to the article specifically: I thought RAG's benefit was that you could ground prompts in "facts" from the provided source documents/vector results, so the LLM's output would always have some canonical reference behind it?
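To make the grounding idea concrete, here is a minimal sketch of the RAG step I have in mind: retrieve the most relevant chunks, then pack them into the prompt so the model is instructed to answer only from cited sources. The retrieval here is naive keyword overlap purely for illustration; real systems use vector search, and all names below (`retrieve`, `build_prompt`) are made up for the sketch.

```python
# Minimal RAG-style grounding sketch: rank documents against the query,
# then assemble a prompt that asks the model to cite only those sources.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Build a prompt that instructs the model to answer only from sources."""
    sources = retrieve(query, documents)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return ("Answer using ONLY the sources below, citing them as [n].\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

docs = [
    "The swupd tool updates Clear Linux.",
    "RAG grounds model output in retrieved documents.",
    "Palantir builds data aggregation software.",
]
print(build_prompt("How does RAG ground model output?", docs))
```

Whether the model actually sticks to the provided sources is the open question; the prompt only encourages it.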


That might be RAG’s benefit if LLMs were more steerable but they can be stubborn.


While I’m receptive to the idea that RAGs have performance limitations, and that graph-database-based solutions may avoid hallucinations, wouldn’t your rhetorical position be best served by offering a trial portal where users can upload their own document corpora and see for themselves that prompts to Stardog never result in hallucinations? Otherwise, writing blog posts into the ether will remain unconvincing to your would-be enterprise customers (whose buyers either reference or are among the HN crowd).


Kendall, the blog link at the end for semantic parsing gives a 404 error.


Fixed. Thanks.


I must say this is amazing. The psychology and manipulation make me realize how poor I am regarding trust, even when the other side is pushing for some unconfirmed equilibrium.

In the game I acknowledge that I was aligned with the "Simpleton" strategy (before it was outlined). Looks like Simpleton might actually be applicable in a more general sense too, which is a little disheartening.
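For anyone who hasn't played it: as I read the game's rules, "Simpleton" is the win-stay, lose-shift (Pavlov) strategy from iterated prisoner's-dilemma literature. A sketch of my understanding, with the function name and move strings invented for illustration:

```python
# "Simpleton" (win-stay, lose-shift) as I understand it: start by
# cooperating; if the opponent cooperated last round, repeat your own
# last move; if they cheated, switch to the opposite move.

def simpleton(my_history, their_history):
    if not my_history:                      # first round: cooperate
        return "cooperate"
    if their_history[-1] == "cooperate":
        return my_history[-1]               # they cooperated: stay
    # they cheated: shift to the opposite of my last move
    return "cheat" if my_history[-1] == "cooperate" else "cooperate"

# Against an always-cheating opponent, Simpleton alternates its move:
mine, theirs = [], []
for _ in range(4):
    mine.append(simpleton(mine, theirs))
    theirs.append("cheat")
print(mine)  # ['cooperate', 'cheat', 'cooperate', 'cheat']
```

Against a cooperator it settles into mutual cooperation, which is probably why it felt like a natural default to fall into.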

