If we ignore the fact that value types likely won’t ship before we have flying cars, Java has evolved greatly. I really like how they’ve solved concurrency, but I dislike how they’ve handled modules – though that’s a minor issue.
The main problem with Java has always been its build tools. They’ve consistently been bad and continue to be. Even today, creating a bundled application with a stripped-down JDK using jlink and jpackage is incredibly painful. You’ll need extensive knowledge of the Java CLI, modules, build-tool plugins, or tools like Mill that simplify using jlink and jpackage – but even then it remains complex and frequently fails. In reality, it should be as simple as something like "java package". Even these days, I frequently see Java developers presenting their desktop apps on Reddit, and if you look at how they deploy them, it's often a fat JAR because they struggle to use jlink and jpackage effectively. Ironically, even generating a fat JAR can be challenging.
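For context, a hedged sketch of what the manual route looks like today – module names, file names, and the main class below are placeholders, not from any particular project:

```shell
# Figure out which JDK modules the app actually needs
# (works cleanly only if your dependencies cooperate)
jdeps --print-module-deps --ignore-missing-deps app.jar

# Build a stripped-down runtime image containing only those modules
jlink --add-modules java.base,java.desktop \
      --strip-debug --no-header-files --no-man-pages \
      --output runtime

# Wrap the runtime image and the app into a native package/installer
jpackage --name MyApp --input libs --main-jar app.jar \
         --main-class com.example.Main --runtime-image runtime
```

Three separate tools, each with its own flags and failure modes – versus the hypothetical single "java package" command.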
As someone who has spent over two decades developing desktop applications – including witnessing the shift to horrendous Electron firsthand – I can tell you that this was a primary reason Java basically vanished from desktops, where it was once so prevalent. Inexperienced developers often struggled to deploy even simple Java applications, grappling with runtime incompatibilities (ironically, developers are reintroducing the same problem with WebView-based applications) or having to tell their users how to launch a Java app. While some claim desktop apps are dead – which is nonsense – the same story applies to CLI applications. CLI apps remain prevalent, primarily written in native languages or Golang, sometimes even Node or one of its derivatives. Rarely Java, for the reasons I just mentioned – and don't get me started on Graal Native. If someone decides to write a trivial CLI app in Golang, they'll simply build it with a single command and be done with it. With Graal Native, you'll have to run the tracing agent, and if you're lucky you'll end up with a fat native executable after minutes of compile time. Forget Graal for a minute, though. Java would already go a long way if bundling your application with a stripped-down JDK (jlink) were as easy as typing a single command, without having to deal with complicated Maven or Gradle plugins.
TIL Mill, I've been in build hell trying to package a javafx GUI gradle project that depends on a non-module-ified lib (usb4java, long story, no I can't use anything else). Beryx/badass failed entirely, was able to get something working with Gradle doing jlink and manual CLI jpackage ...
But tbh the whole experience makes me distrust the Java ecosystem if you're supporting anything that is slightly out of the community's view or priorities. Even JavaFX shows very patchy support for certain very standard UI concepts, and the situation with packaging is bad as you say.
Anyway, is mill worth switching away from Gradle? (Does mill integrate at all with idea?)
Mill does work with IDEA (via the Build Server Protocol), and the ideas behind it are very sound: a build tool is basically functions calling other functions, on which they depend – you just want to parallelize their execution and cache their results.
But it does have a learning curve and you may sometimes end up with strange error messages. (As an implementation, it's basically Scala macros turning normal-looking Scala functions into a static task graph.) It is getting better and better support for mainstream Java build setups, and it's possibly the best tool for something very custom. In between the two extremes, you may or may not have a better time with Gradle/Maven.
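The "functions calling functions, parallelized and cached" mental model can be sketched in a few lines. This is an illustrative toy, not how Mill is actually implemented:

```python
from functools import lru_cache

# Each "task" is a function; its body calls the tasks it depends on.
# Caching results is what makes incremental builds cheap, and because
# the dependency graph is explicit, independent tasks could in
# principle run in parallel.

@lru_cache(maxsize=None)
def sources():
    return ("Main.java",)

@lru_cache(maxsize=None)
def compile_task():
    # depends on sources(); pretend we invoke the compiler here
    return tuple(f.replace(".java", ".class") for f in sources())

@lru_cache(maxsize=None)
def jar():
    # depends on compile_task(); pretend we zip the class files here
    return f"app.jar({','.join(compile_task())})"

print(jar())  # each task runs at most once; repeated calls hit the cache
```

A real build tool adds invalidation (re-run a task when its inputs change on disk), but the core idea really is this small.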
I’ve reached a point where I stop reading whenever I see a post that mentions “one-shot.” It's becoming increasingly obvious that many platforms are riddled with bots or incompetent individuals trying to convince others that AI is some kind of silver bullet.
RAM encryption doesn’t prevent DMA attacks, and performing a DMA attack is quite trivial as long as the machine is running. Secure enclaves do prevent them, and they're a good solution. If implemented correctly, they have no downsides. I'm not referring to TPMs, due to their inherent flaws; I’m talking about SoC crypto engines like those found in Apple’s M series or Intel's latest Panther Lake lineup. They prevent DMA attacks and side-channel vulnerabilities. True, I wouldn’t trust any secure enclave never to be breached – that’s an impossible promise to make, even if a breach would require a nation-state-level attack – but even this concern can easily be addressed by making the final encryption key depend on both software key derivation and the secret stored within the enclave.
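The last point can be illustrated with a short sketch: derive one key from the passphrase in software, take the other secret from the enclave, and bind them together so that compromising the enclave alone yields nothing usable. All names and parameters here are illustrative:

```python
import hashlib, hmac

def software_key(passphrase: bytes, salt: bytes) -> bytes:
    # Any strong password KDF works here; scrypt is just a stand-in
    return hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)

def final_key(sw_key: bytes, enclave_secret: bytes) -> bytes:
    # HMAC as a simple two-input combiner: the result depends on BOTH
    # the passphrase-derived key and the secret held by the enclave
    return hmac.new(enclave_secret, sw_key, hashlib.sha256).digest()

k = final_key(software_key(b"correct horse", b"per-device-salt"),
              b"secret-held-by-enclave")
assert len(k) == 32
```

An attacker who extracts the enclave secret still faces a full password-cracking exercise against a memory-hard KDF, and vice versa.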
It really was Oracle’s fault – they neglected deployment for too long. Deploying Java applications was simply too painful, and neither JLink nor JPackage existed.
> Customers simply don't care. I don't recall a single complain about RAM or disk usage of my Electron-based app to be reported in the past 10 years.
Nothing is worse than reading something like this. A good software developer cares. It’s wrong to assume customers don't care simply because they don't know what's going on under the hood. Given the downsides and the resulting side effects (latency, higher CPU and RAM consumption, fans spinning, etc.), they definitely do care. For example, Microsoft has been using React components in its UI, assuming customers wouldn’t care – but as we have been seeing lately, they do.
I've always liked Scala as a language, but it's challenging to write high-performing and memory-efficient code on the JVM in general. Whenever you raise this issue, you'll encounter a horde of JVM fanboys who insist it’s not true, giving you all kinds of nonsense excuses and accusing you of not measuring performance or memory consumption properly. If you genuinely want to produce well-performing JVM code, you're essentially writing C-style Java. As soon as you introduce abstraction, performance issues inevitably arise – largely because the features and modernizations from Project Valhalla haven’t shipped yet. Scala proponents will suggest using macros and opaque types, but at scale this approach becomes incredibly cumbersome, and even then you won't be able to completely prevent boxing that is actually unnecessary; you could just as well be writing Rust.
My main machines have been running Linux for years now, but there are still some things that really bother me. For one, I think dealing with virtual machines is still somewhat painful on Linux. VM managers continue to be clunky (I believe KDE is working on a new one), and GPU acceleration, let alone partitioning, isn’t really a thing for Windows guests – something that works out of the box on WSL. Another frustrating part is the lack of a proper alternative to Windows Hello that allows you to set up passkeys using TPMs.
I think you’re referring to the ability to split a physical NVIDIA GPU into multiple virtual GPUs so that you can do full GPU pass-through with one card (without having to resort to hacks like disconnecting host sessions.)
What vm-curator provides is an easy way to use QEMU’s built-in para-virtualization (virtio-vga-gl, a.k.a. virgl) in a manner that works with NVIDIA cards. This is not possible with libvirt-based tools because of a bug in the interaction between libvirt and NVIDIA’s Linux drivers.
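For the curious, a hedged sketch of what driving virgl directly with QEMU looks like – these are the standard flags, not vm-curator's exact invocation, and the disk image name is a placeholder:

```shell
qemu-system-x86_64 -enable-kvm -m 8G -cpu host \
  -device virtio-vga-gl \
  -display gtk,gl=on \
  -drive file=guest.qcow2,if=virtio
```

The virtio-vga-gl device plus an OpenGL-enabled display is what turns on the virgl accelerated path; going through libvirt's XML layer is where the NVIDIA trouble reportedly starts.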
I’m not trying to defend Microsoft, but I think people are being a bit dramatic. It's a fairly reasonable default setting for average users who simply want their data protected from theft. On the other hand, users should be able to opt out from the outset, and above all, without having to fiddle with the manage-bde CLI or group policy settings.
With Intel Panther Lake (I'm not sure about AMD), BitLocker will be entirely hardware-accelerated using dedicated SoC engines – a huge improvement that addresses many commonly known full-disk-encryption vulnerabilities. However, in my opinion some changes still need to be made, particularly for machines without hardware-acceleration support:
- Let users opt out of storing recovery keys online during setup.
- Let users choose between TPM-based and password-based FDE during setup, and let them switch between those options without forcing them to deal with group policies and the CLI.
- Change the KDF to a memory-hard one – this is important for both password- and PIN-protected FDE. It's 2026 – we shouldn't be spamming SHA-256 anymore.
- Remove the 20-character limit from PIN protectors and make them alphanumeric by default. Windows 11 requires TPM 2.0 anyway, so there's no point in enforcing a 20-character limit.
- Enable TPM parameter encryption for the same reasons outlined above.
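On the KDF point above: Python's standard library already ships scrypt, which gives a feel for what "memory-hard" buys you. The parameters below are illustrative, not a tuning recommendation:

```python
import hashlib

# scrypt with n=2**14, r=8 forces roughly 128 * n * r bytes (~16 MiB)
# of memory per guess, which is what makes large-scale GPU/ASIC
# cracking expensive; iterated SHA-256 has no such memory cost.
key = hashlib.scrypt(b"user PIN or passphrase", salt=b"unique-per-volume",
                     n=2**14, r=8, p=1, dklen=64)
print(len(key))  # 64 bytes of derived key material
```

An attacker brute-forcing PINs against this has to pay the memory cost per guess; against iterated SHA-256, cheap massively parallel hardware does the job.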
It’s not that simple because most people will instinctively click ‘no’ without fully understanding the risks. They'll assume that as long as they don't forget their password, it’ll be fine – which is the case on Macs because, unlike PCs, Mac hardware is locked down. Mac users won’t ever be required to enter a recovery key just because they’ve installed an update.
> If you don’t think Intel put back doors into that then I fear for the future.
If that’s what you’re worried about, you shouldn’t be using computers at all. I can pretty much guarantee that Linux will adopt SoC based hardware acceleration because the benefits – both in performance and security – outweigh the theoretical risks.
> This is by far one of the best advertisements for LUKS/VeraCrypt I've ever seen.
LUKS isn't all rainbows and butterflies either [https://news.ycombinator.com/item?id=46708174]. This vulnerability has been known for years, and despite this, nothing has been done to address it.
Furthermore, if you believe that Microsoft products are inherently compromised and backdoored, running VeraCrypt instead of BitLocker on Windows likely won’t significantly improve your security. Implementing a VeraCrypt backdoor would be trivial for Microsoft.