Hacker News | __turbobrew__'s comments

> I know, but the density of the data is much less in human case.

Is that really the case? How much data is 4K video, high-bitrate audio, spatial mapping, internal and external nervous system signals, and emotions, plus a dataset correlating all of these in time?


You know what they say: shit rolls downhill. I don't personally know the CEO, but the impression I've gotten from their public fits on social media doesn't instill confidence.

If I was a CF customer I would be migrating off now.


The way I think of it is that a DB should only be accessed by a single replicaset in k8s. Only processes running identical code should share a DB; everything else goes through RPC interfaces.

This is how large scale systems are built, but the pattern makes less sense the smaller your footprint is.
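A minimal sketch of the pattern, using sqlite3 standalone in place of a real database and illustrative service names (none of this is from any particular codebase): one service owns its table exclusively, and other services depend only on its call interface.

```python
import sqlite3

class OrderService:
    """The only process (replicaset) allowed to touch the orders table."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")  # owned exclusively by this service
        self.db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

    def create(self, order_id: int, total: float) -> None:
        self.db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))

    # The RPC surface: other services call this, never the table directly.
    def get_total(self, order_id: int) -> float:
        row = self.db.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return row[0]

class BillingService:
    """Depends on OrderService's interface, not on its database schema."""

    def __init__(self, orders: OrderService):
        self.orders = orders

    def invoice(self, order_id: int) -> str:
        return f"invoice for ${self.orders.get_total(order_id):.2f}"

svc = OrderService()
svc.create(1, 9.99)
assert BillingService(svc).invoice(1) == "invoice for $9.99"
```

Because BillingService never sees the schema, OrderService can change its storage layout without coordinating a migration across teams.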


> for example, it'd be easy to write code that creates a resources based on the current time of day

Some languages make that impossible to do; Starlark, for example, is deterministic and hermetic, so there is no way to read the current time.


Project 2025 was publicly available prior to the election, and tariffs were one of the many policies within the larger plan. If you voted for Trump, you are responsible for the tariffs; this is not a hoodwink where Trump rug-pulled everyone after getting elected. It was literally there in the open.

Even beyond/disregarding Project 2025, tariffs were a well-known part of the GOP platform in 2024; they were even discussed at the presidential debate. The Harris platform called them a tax at the time, to make quite clear to voters who would bear the cost in the end, and the Trump platform equivocated on who would pay to distract from the fact that Harris was right.

Even if you knew nothing of Project 2025 (somehow), you were warned.


On top of that, you have news outlets and educated people not being clear about what they are. From the article:

> He has long argued tariffs boost American manufacturing - but many in the business community, as well as Trump's political adversaries, say the costs are passed on to consumers

It’s reported as if someone still needs to figure out who pays the tariffs in the end. I’m aware that tariffs are a lever to potentially shift buying behavior and give incentives to move production locally. But in this instance, given how it was implemented, it’s clear who is paying for it.


> Even beyond/disregarding Project 2025, tariffs were a well-known part of the GOP platform in 2024;

The tariff stuff is just a variation on the Republican dream of replacing income tax with a sales tax: a big tax cut for higher incomes while raising taxes on lower incomes.


> The AI should decide

That is a great recipe for systematic discrimination.


The whole goal of a hiring pipeline is to create a system that discriminates: it winnows a ton of candidates down to the good people to hire.

Ceph has synchronous replication: writes have to be acked by all replicas before the client gets an ack. Fundamentally, the latency of a Ceph write is at least the latency between the OSDs. This is a tradeoff Ceph makes for strong consistency.
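A toy model of why that is: since the client ack waits on every replica, the slowest OSD link sets the floor for write latency (the numbers below are made up for illustration).

```python
def sync_write_latency_ms(replica_latencies_ms):
    """Synchronous replication: the client ack can't return until the
    slowest replica has acked, so write latency >= max inter-OSD latency."""
    return max(replica_latencies_ms)

# Two local replicas plus one replica across a slow link:
# the remote link dominates every single write.
assert sync_write_latency_ms([0.2, 0.3, 5.0]) == 5.0
```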

I know. We've run it for nearly a decade now. I mentioned it because a lot of MinIO deployments are pretty small.

I had 2 servers at home running MinIO's built-in site replication, and it was a super easy setup; replicating that with Ceph would take far more of both hardware and work. So while Ceph might fit the feature list on paper, realistically it isn't an option.


The Ceph way of doing asynchronous replication would be to run separate clusters and ship incremental snapshots between them. I don't know if anyone has programmed the automation for that, but it's definitely doable. For S3 only, radosgw has its own async replication mechanism.
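The snapshot-shipping idea reduces to "diff two snapshots, send the diff, apply it". This is a toy dict-based illustration of that logic only; real RBD clusters would use `rbd export-diff` / `rbd import-diff` rather than anything like this:

```python
def snapshot_diff(prev, curr):
    """Compute what changed between two snapshots: the keys that were
    added or modified, and the keys that were deleted."""
    changed = {k: v for k, v in curr.items() if prev.get(k) != v}
    deleted = [k for k in prev if k not in curr]
    return changed, deleted

def apply_diff(replica, changed, deleted):
    """Apply an incremental diff to a replica that holds the prior snapshot."""
    replica.update(changed)
    for k in deleted:
        replica.pop(k, None)

primary_snap1 = {"a": 1, "b": 2}
primary_snap2 = {"a": 1, "b": 3, "c": 4}

replica = dict(primary_snap1)  # replica already has snap1
changed, deleted = snapshot_diff(primary_snap1, primary_snap2)
apply_diff(replica, changed, deleted)
assert replica == primary_snap2  # replica converges without a full copy
```

The win over full-copy replication is that only the delta crosses the (possibly slow) inter-cluster link, which is why it pairs naturally with async replication.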

https://ceph.io/en/news/blog/2025/stretch-cluuuuuuuuusters-p...

Disclaimer: ex-Ceph-developer.


If you set up Ceph correctly (multiple failure domains, correct replication rules across failure domains, monitors spread across failure domains, OSDs not force-purged), it is actually pretty hard to break. Rook helps a lot too, as it makes it easier to set up Ceph correctly.
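For concreteness, a sketch of the kind of CRUSH rule that spreads replicas across failure domains; the rule name, id, and `rack` bucket type are illustrative and would depend on how your CRUSH hierarchy is actually laid out:

```
rule replicated_across_racks {
    id 1
    type replicated
    step take default
    # pick each replica from a different rack, so losing one
    # rack never loses more than one copy of a PG
    step chooseleaf firstn 0 type rack
    step emit
}
```

With a rule like this plus monitors spread across the same domains, a single rack failure degrades the cluster without data loss.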

Definitely, ceph shines in the 1-100 petabyte range whereas minio excelled in the 0-1 petabyte range.

Regarding AIStore, the recommended production configuration is Kubernetes, which brings in a huge amount of complexity. Also, one person (Alex Aizman) has about half of the total commits in the project, so the bus factor appears to be 1.

I could see running AIStore in single-binary mode for small deployments, but for anything large and production-grade I would not touch it. Ceph is going to be the better option IMO; it is a truly collaborative open source project developed by multiple companies, with a long track record.

