It's pretty interesting that OpenGL achieved its stated goal and is the graphics API with the highest degree of compatibility across many devices.

Vulkan more or less shares that goal, but only for hardware that was current when it arrived 24 years later (2016). In this case (Intel HD Graphics 4400, so Haswell?), there is unofficial support on Linux that can be enabled with some hacks, and it may or may not work. Similar unofficial support for my previous (desktop) AMD GPU generally worked fine. The situation for Haswell seems more iffy, though.

I tried to think it through, and leap seconds on their own don't seem to be a real problem. The problem is that leap seconds, minutes, hours, days, years, etc. are human-interface concepts and therefore only make sense to humans, yet we've decided to force machines to deal with these human-interface concepts as the primary way of dealing with time, when only the presentation layer should even know what a leap second is.
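
To make that concrete, here is a minimal sketch of my own (an abridged leap-second table, not any real library's API) of what "only the presentation layer knows about leap seconds" could look like: everything below that layer does plain arithmetic on a linear, TAI-style scale.

    # Minimal sketch: keep machine time on a linear scale and apply
    # leap-second knowledge only when formatting for humans. The table
    # below is abridged and purely illustrative; a real system would
    # load the full IERS leap-second list.
    from datetime import datetime, timedelta

    # (TAI - UTC) offset in seconds, in effect from the given TAI instant onward.
    LEAP_TABLE = [
        (datetime(2015, 7, 1, 0, 0, 35), 36),
        (datetime(2017, 1, 1, 0, 0, 36), 37),
    ]

    def tai_to_utc_label(tai: datetime) -> str:
        """Presentation layer: pick the offset in effect and format as UTC.
        (The leap second itself, 23:59:60, is glossed over for brevity.)"""
        offset = 35  # offset before the first table entry, also illustrative
        for effective_from, seconds in LEAP_TABLE:
            if tai >= effective_from:
                offset = seconds
        return (tai - timedelta(seconds=offset)).isoformat()

    # Everything else just does linear arithmetic; durations never need
    # to know that leap seconds exist.
    t0 = datetime(2016, 12, 31, 23, 59, 50)              # an instant on the TAI scale
    print(tai_to_utc_label(t0 + timedelta(seconds=60)))  # 2017-01-01T00:00:13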

The second video you've linked is fake in every respect as far as the robot is concerned.

The robot is floating above the ground.

The paddle is phasing in and out of existence.

The robot has a realistic human hand and uses it to hit the ball.

The robot randomly turns around mid-air near the end of the video.

The robot looks nothing like a Unitree robot.

Oh, how could I forget, the entire robot looks so obviously fake even when disregarding all of the above that I can't believe you're even trying to analyze anything in that video.


The impressive part here isn't the movement itself. You can easily train a model to perform a "procedural animation" that includes a full-body control policy. The hard part is making it reliable enough to perform long sequences of movements and to adapt to differences in robot placement. In other words, performing a flawless stage play is the hardest part.

I'm afraid you might not understand what you're talking about. Animation is a geometry problem, while robotics is a dynamics problem. The latter is subject to constraints many times greater than the former. There is no such "easy" model as you imagined that can transform the former into the latter.

Nvidia started their GPGPU adventure by acquiring a physics engine and porting it over to run on their GPUs. Supporting linear algebra operations was pretty much the goal from the start.

They were also full of lies when they started their GPGPU adventure (just as they are today).

For a few years they continuously repeated that GPGPU could provide about 100 times the speed of CPUs.

This has always been false. GPUs really are much faster, but their performance per watt has mostly hovered around 3 times that of CPUs, sometimes reaching 4 times. That is impressive, but very far from the factor of 100 originally claimed by NVIDIA.

Far more annoying than the exaggerated performance claims is how, during the first GPGPU years, the NVIDIA CEO kept talking about how their GPUs would democratize computing, giving everyone access to high-throughput computing.

After a few years, these optimistic prophecies stopped, and NVIDIA promptly removed fast FP64 support from their reasonably priced GPUs.

A few years later, AMD followed NVIDIA's example.

Now only Intel has made an attempt to revive GPUs as "GPGPUs", but there seems to be little conviction behind this attempt, as they do not even advertise the compute capabilities of their GPUs. If Intel also abandons this market, then the "general purpose" in GPGPU will really be dead.


GPGPU is doing better than ever.

Sure, FP64 is a problem and isn't always available in the capacity people would like, but there is a lot you can do just fine with FP32, and all of that research and engineering absolutely is done on GPUs.
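
To illustrate the trade-off with a toy NumPy example of my own (not tied to any particular workload): FP32 carries roughly seven significant digits, so small contributions can vanish next to large values, but keeping the bulk data in FP32 and widening only where it matters is often good enough.

    # Toy illustration of the FP32 vs FP64 trade-off with NumPy.
    import numpy as np

    # The FP32 spacing near 1e8 is 8, so the +1 is lost entirely:
    print(np.float32(1e8) + np.float32(1.0))   # 1e+08
    print(np.float64(1e8) + np.float64(1.0))   # 100000001.0

    # A common compromise: keep bulk data in FP32, but accumulate
    # reductions in FP64 where the extra digits matter.
    data = np.random.rand(1_000_000).astype(np.float32)
    print(data.sum(dtype=np.float32), data.sum(dtype=np.float64))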

The AI craze has also made all of it much more accessible. You no longer need advanced C++ knowledge to write and run a CUDA project. You can just take PyTorch, JAX, CuPy or whatnot and accelerate your numpy code by an order of magnitude or two. Basically everyone in STEM is using Python these days, and the scientific stack works beautifully with NVIDIA GPUs. Guess which chip maker will benefit if any of that research turns out to be a breakout success in need of more compute?
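
As a rough sketch of that workflow (a hypothetical toy workload; the actual speedup depends entirely on your GPU and problem size), moving from NumPy to CuPy is often just a change of array module:

    # The same array code, once with NumPy on the CPU and once with CuPy
    # on an NVIDIA GPU (requires a CUDA-enabled CuPy install).
    import numpy as np

    try:
        import cupy as cp
    except ImportError:
        cp = None  # no GPU stack available; the CPU path still runs

    def workload(xp, n=4096):
        """Toy FP32 workload: a big matmul plus an elementwise nonlinearity."""
        a = xp.random.rand(n, n).astype(xp.float32)
        b = xp.random.rand(n, n).astype(xp.float32)
        return xp.tanh(a @ b).sum()

    print(float(workload(np)))             # plain NumPy on the CPU
    if cp is not None:
        result = workload(cp)              # identical code, executed on the GPU
        cp.cuda.Stream.null.synchronize()  # wait for the GPU before reading back
        print(float(result))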


What's particularly ironic is that economists are redundant from a mainstream economics perspective. They'd be the first job to cut.

>* Formal verification, which is very widely used in hardware and barely used in software (not software's fault really - there are good reasons for it).

When developing in C, model checking or at least fuzzing is practically mandatory; skipping both is negligent.


As far as I know, the hypothesis is that Elon knew before the election that he would be in trouble and tried to cozy up to Trump to cover his ass, but it backfired hard.

They've already ceded the entire GPU programming environment to their competitor. CUDA is as relevant as it always has been.

The primary competitors are Google's TPUs, which are programmed using JAX, and Cerebras, which has an unrivaled hardware advantage.
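
As a minimal sketch of why the framework layer is what matters here (my own toy example, not anyone's production code): the same jitted JAX function runs unchanged on whatever backend XLA finds, be it CPU, GPU or TPU.

    import jax
    import jax.numpy as jnp

    @jax.jit
    def step(w, x):
        """Toy compute kernel: a matmul followed by a nonlinearity."""
        return jnp.tanh(x @ w)

    w = jnp.ones((512, 512), dtype=jnp.float32)
    x = jnp.ones((64, 512), dtype=jnp.float32)

    # Reports 'cpu', 'gpu' or 'tpu' depending on what is installed/attached.
    print(step(w, x).shape, jax.devices()[0].platform)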

If you insist on a hobbyist-accessible underdog, you'd go with Tenstorrent, not AMD. AMD is only interesting if you've already been buying Blackwells by the pallet and you're okay with building your own inference engine in-house for a handful of models.


I started a Minecraft modpack called Omnifactory (since obsoleted by Nomifactory) in 2020 and resumed it in December 2025, and I've got to say the post-Creative Tank [0] endgame is horrible.

The primary endgame resource is "Chaos Shards". You get 4 Chaos Shards for every Tier 7 + 8 miner pair. Making miners is fast, but you also need "Dragon Lair Data", whose primary source is Simulation Chambers. I have every creative item except the infinity solar panels. Each of those requires 36 infinity ingots, which in turn require 36 hearts of the universe, which in turn require a Tier 9 and Tier 10 miner pair. The endgame feels like an eternal waiting room because of those stupid infinity solar panels.

Adding insult to injury, I recently started adding hundreds of Simulation Chambers to the game and the ticks per second dropped so badly that it ended up being a net negative for the Chaos Shard production rate measured in real-world hours. Having an absurd time gate like "Dragon Lair Data", where you can do nothing but wait, is bad gameplay.

There's also the mild annoyance that your storage system is overflowing with millions of items due to how long it takes to make all six infinity solar panels.

Why am I posting this? I'm saying that the Minecraft developers obviously didn't foresee people making factory mods that cause performance to tank hard. The game engine was never meant for this kind of workload, unlike, say, the engine behind Satisfactory.

[0] The Creative Tank lets you duplicate any liquid, which includes almost all ingot types, giving you infinite base resources.

