In all fairness, being both an avid Deutsche Bahn victim (with the Gold victim status), and knowing the German court system ... that was perfectly plausible, if a bit optimistic. I'd do many, many things if I got a 50% chance of arriving on time.
No joke: 15 years ago, when I was riding DB trains regularly, I got whole packs of refund forms. It took a while to find a clerk who wouldn't refuse that request. I built a rudimentary transparent template in LaTeX with my name, address, etc., and pushed a whole pack through a printer to pre-fill most of each form, leaving only the date and train to be filled in manually. My trains were always delayed, so this saved a lot of time.
If you had actually taken a train in the last 3 years, you'd know that the process is now online via the app/website, and everything is already filled out for you.
These changes are effective April 1st for existing and new customers. The price increase ratios are also different across product lines.
* Cloud (VMs): 38%
* Bare metal: 15%
* Memory add-on for bare metal: 575% (effective immediately)
It feels like memory add-on is intentionally set high to discourage customers from adding more memory.
AX102 (128 GB RAM) costs €124, AX162 (256 GB RAM) costs €244, but the 128 GB memory add-on alone costs €264. If we ignore the setup fee, it’s more cost-effective to provision additional servers instead of adding RAM to bare metal instances.
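To make the comparison concrete, here's a quick back-of-envelope in Python (prices taken from the figures above; the per-GB numbers are my own arithmetic, not Hetzner's):

```python
# Back-of-envelope: EUR per GB of RAM, using the prices quoted above.
ax102_price, ax102_ram = 124, 128   # EUR/month, GB
ax162_price, ax162_ram = 244, 256
addon_price, addon_ram = 264, 128   # 128 GB memory add-on

ax102_per_gb = ax102_price / ax102_ram   # ~0.97 EUR/GB
ax162_per_gb = ax162_price / ax162_ram   # ~0.95 EUR/GB
addon_per_gb = addon_price / addon_ram   # ~2.06 EUR/GB

# The add-on costs more than twice as much per GB as a whole AX162,
# so (setup fees aside) extra servers beat extra RAM.
assert addon_per_gb > 2 * ax162_per_gb
```

So even a whole second server is cheaper per GB than bolting the add-on onto an existing one.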
By this time next year, prices will likely have gone down; maybe not to pre-increase levels, but surely much lower than today. Putting it in my calendar to revisit this comment in a year :)
Well, if all the doomers and gloomers were correct that this is the end of hardware at home, we'd see the price continue to increase and suppliers trying to ramp up production, even if it'd take a long time.
The fact that it stabilized (at whatever price) and that suppliers aren't even thinking about ramping up production should tell people that the doomers and gloomers were yet again overreacting to things they don't fully understand themselves.
> Grocery prices have also stabilized but I’m still paying too much
I think that's a local problem, if you happen to live in a country that's been moving toward isolationism rather than globalism as of late. In other modern countries prices are also increasing, but at least they're somewhat tracking inflation, so the increase doesn't feel as bad for us. Maybe not yet, at least? Who knows.
> Well, if all the doomers and gloomers were correct that this is the end of hardware at home, we'd see the price continue to increase, and suppliers trying to ramp up production, even if it'd take long time.
Ramping up production takes months, and earning back the cost of ramping up takes years. Manufacturers have started investing in more production capacity, but it'll be a while before that new supply reaches the market.
Based on interviews with industry professionals, I believe the forecast is that RAM prices will start going down again between August and the end of next year. Until then, prices will climb as stock depletes and RAM production is capped.
> Manufacturers have started investing in more production capacity
Where are you getting this from? Because that's not what I've seen; if anything, the industry seems to be lowering production capacity, not increasing it.
And even if it takes years, if they thought the growth in demand was sustainable, they'd at least be moving in that direction, which, again, doesn't seem to be happening.
> I believe the forecast is that RAM prices will start going down again between August and the end of next year. Until then, prices will climb as stock depletes and RAM production is capped.
That is a mere short-term plateau as buyers curtail spending (which has already happened twice), not true stabilization. Analyst firms like TrendForce expect a further 15-20% increase through the end of Q2/Q3.
Before today, we could order an AX162-R for €207 and add 128 GB of RAM for €46. Starting today, the same calculator shows €207 for an AX162-R (*) and €264 for the 128 GB RAM add-on. Sadly, HN doesn't let me upload screenshots.
(*) The price change for AX162-R machines is effective starting April 1st.
> These changes are effective April 1st for existing and new customers.
Checking today doesn't really indicate anything.
It's worth noting that the hardware price of RAM is up at least 550% YoY, so this was always going to happen as soon as their existing contracts came up for renewal.
I feel this analysis is unfair to PostgreSQL. PG is highly extensible: you can extend the write-ahead log, the transaction subsystem, foreign data wrappers (FDW), indexes, types, replication, and more.
I understand that MySQL follows a specific pluggable storage architecture. I also understand that the direct equivalent in PG appears to be table access methods (TAM). However, you don't need TAM to build this; I'd argue FDWs are much more suitable.
Also, I think this design assumes that you'd swap PG's storage engine and replicate data to DuckDB through logical replication. The explanation then notes deficiencies in PG's logical replication.
I don't think this is the only possible design. pg_lake provides a solid open source implementation on how else you could build this solution, if you're familiar with PG: https://github.com/Snowflake-Labs/pg_lake
All up, I feel this explanation is written from a MySQL-first perspective. "We built this valuable solution for MySQL. We're very familiar with MySQL's internals and we don't think those internals hold for PostgreSQL."
I agree with the solution's value and how it integrates with MySQL. I just think someone knowledgeable about PostgreSQL would have built things in a different way.
Thanks for providing this from the PG perspective. I also wonder whether a storage engine such as OrioleDB would be better suited than FDWs for handling consistency between copies of the same data in PG and DuckDB?
Actually, that's not the case. I also support PostgreSQL products in my professional work. However, on this specific issue, as I mentioned in my article, it is simply easier to integrate DuckDB by leveraging MySQL's binlog and its pluggable storage engine architecture.
Also, do you know if their benchmarks are available?
On their website, the benchmark categories are listed as "Multilingual (Chinese), Multilingual (East Asian), Multilingual (Eastern Europe), Multilingual (English), Multilingual (Western Europe), Forms, Handwritten," etc. However, there's no reference to the underlying benchmark data.
Honestly, just pay Snowflake for the amazing DB and ecosystem it is, and then go build cool stuff. Unless your value-add to customers is infra, let them handle all of that.
Every time you want to query your data, you pay the compute cost.
If instead you can write to something like Parquet/Iceberg, you're not paying just to access your own data.
Snowflake is great at aggregations and other heavy lifting (seriously, huge fan of Snowflake's SQL capabilities), but say you have a visualisation tool: you're paying compute every time it pulls data out.
If you instead write the data to something like S3, you can hook your tools up to that directly.
It's expensive to pull data out of Snowflake otherwise.
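As a rough illustration of the cost gap, here's a back-of-envelope sketch. The rates are ballpark public list prices quoted from memory (treat them as assumptions, not authoritative figures), and the warehouse math is a worst case:

```python
# Illustrative only: ballpark list prices from memory, not authoritative.
CREDIT_PRICE = 3.00          # USD per Snowflake credit (Standard-ish tier)
XS_CREDITS_PER_HOUR = 1.0    # an XS warehouse burns 1 credit/hour
MIN_BILL_SECONDS = 60        # per-second billing with a 60 s minimum

def warehouse_cost_usd(run_seconds):
    # Worst case: the warehouse resumes for this query alone. In practice
    # the 60 s minimum applies per resume, and concurrent queries on a
    # running warehouse share the same billed time.
    billed = max(run_seconds, MIN_BILL_SECONDS)
    return XS_CREDITS_PER_HOUR * CREDIT_PRICE * billed / 3600

S3_GET_PER_1000 = 0.0004     # USD per 1,000 GET requests (ballpark)

daily_snowflake = 1000 * warehouse_cost_usd(5)  # 1000 short queries/day
daily_s3 = S3_GET_PER_1000                      # 1000 GETs/day
```

Even granting that real warehouse sharing brings the Snowflake side down a lot, the per-request cost of serving a chatty BI tool straight from object storage is orders of magnitude lower.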
Ok, so I build my data lake on S3 using all open tech. I'm still paying S3 for PUTs, reads, and LISTs.
Ok, I put it on my own hardware in my own colo. You're still paying for electricity and other things. Everything is lock-in.
On top of that you’re beholden to an entire community of people and volunteers to make your tech work. Need a feature? Sponsor it. Or write it and fight to upstream it. On top of that if you do this at scale at a company what about the highly paid team of engineers you have to have to maintain all this?
With Snowflake, I alone could provide an entire production-ready BI stack to a company. And I can do so and sleep well at night knowing it's managed and taken care of, and that if it fails, entire teams of people are working to fix it.
Are you going to build your own roads, your own power grid, your own police force?
Again my point remains. The vast majority of times people build on a vendor as a partner and then go on to build useful things.
Apple uses cloud vendors for iCloud storage. You think they couldn't do it themselves? That they couldn't find, pay for, and support all the tech on their own? Of course they could. But they have better things to do than reinvent the wheel, i.e. build value on top of dumb compute, and that's iCloud.
After running Snowflake in production for 5+ years I would rather have my data on something like Parquet/Iceberg (which Snowflake fully supports...) than in the table format Snowflake has.
I think a hybrid approach works best (store in Snowflake's native format, and in Iceberg tables where needed); it gives you the benefits of Snowflake without paying its cost for certain workloads (which really adds up).
We're going to see more of this (either open or closed source), since Snowflake has acquired Crunchy Data, and the last major bastion is "traditional" database <> Snowflake.
They didn't do it out of goodwill. They realized that's where the market was going, and that if their query engine didn't perform as well as others on top of Iceberg, they'd be another Oracle in the long term.
Teams of the smartest people on earth make these kinds of big vendor decisions, and vendor lock-in is top of mind. I tell anyone who will listen to avoid Databricks live tables and the sleazy sales reps pushing them over cheaper, less locked-in solutions.
Snowflake is expensive, even compared to Databricks, and you pay AWS's pre-discount storage price while they get the volume discount and pocket the difference as profit.
If the benchmark doesn't use AIO, why is there a performance difference between PG 17 and 18 in the blog post (sync, worker, and io_uring)?
Is it because remote storage in the cloud always introduces some variance & the benchmark just picks that up?
For reference, anarazel gave a presentation about AIO at pgconf.eu yesterday. He mentioned that remote cloud storage always introduces variance, making benchmark results hard to interpret. His solution was to introduce synthetic latency on local NVMes for benchmarks.
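To illustrate why that variance matters (a toy simulation of my own, not the actual pgconf.eu methodology): with jittery per-I/O latency, repeated runs of the same workload spread widely, so small cross-version differences drown in noise, whereas a fixed synthetic delay on local disks keeps run-to-run spread tight.

```python
import random
import statistics

def run_benchmark(n_ios, base_ms, jitter_ms, rng):
    # Total time for n_ios sequential I/Os, each paying a base latency
    # plus uniform random jitter (crude model of storage behavior).
    return sum(base_ms + rng.uniform(0, jitter_ms) for _ in range(n_ios))

rng = random.Random(42)
# 20 repeated runs each: low-jitter "local NVMe" vs. high-jitter "cloud".
local = [run_benchmark(1000, 0.1, 0.02, rng) for _ in range(20)]
remote = [run_benchmark(1000, 1.0, 2.0, rng) for _ in range(20)]

# Coefficient of variation: relative run-to-run spread.
cv = lambda xs: statistics.stdev(xs) / statistics.mean(xs)
assert cv(remote) > cv(local)
```

The jittery "remote" runs show a far larger relative spread, which is exactly the noise that makes a few-percent regression between PG versions hard to detect on cloud storage.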
Please note that LLMs have progressed at a rapid pace since February. We see much better results with the Qwen3-VL family, particularly Qwen3-VL-235B-A22B-Instruct for our use case.
Magistral-Small-2509 is pretty neat as well for its size; it has reasoning + multimodality, which helps in cases where the context isn't immediately clear or there are a few missing spots.
I agree with you. I feel the challenge is that using AI coding tools is still an art, and not a science. That's why we see many qualitative studies that sometimes conflict with each other.
In this case, we found the following interesting. That's why we nudged Shikhar to blog about his experience and put a disclaimer at the top.
* Our codebase is in Ruby and follows a design pattern that's uncommon in the industry
* We don't have a horse in this game
* I haven't seen an evaluation that assesses coding tools along the (a) coding, (b) testing, and (c) debugging dimensions
Just in case you have $3-4M lying around somewhere for some high quality inference. :)
SGLang quotes a 2.5-3.4x speedup compared to H100s. They also note that more optimizations are coming, but they haven't yet published part 2 of the blog post.
Isn't Blackwell optimized for FP4? This blog post runs DeepSeek at FP8, which is probably the sweet spot, but new models with FP4-native training and inference would be drastically faster than FP8 on Blackwell.
I also found the HN discussion at the time informative: https://news.ycombinator.com/item?id=6457801