Not illegal per se in Germany, but you won't find a legal job that doesn't require you to have a bank account. Benefits are also only paid electronically (exceptions apply for some asylum seekers).
You also cannot get a tax refund or pay taxes without a bank account.
The comments explain the nuance there pretty well:
> This study had 16 participants, with a mix of previous exposure to AI tools - 56% of them had never used Cursor before, and the study was mainly about Cursor.
> My intuition here is that this study mainly demonstrated that the learning curve on AI-assisted development is high enough that asking developers to bake it into their existing workflows reduces their performance while they climb that learning curve.
Giving people a tool they have no experience with and expecting them to be productive feels... odd?
That's a good point. I myself am the easiest person to fool.
I knocked together a quick analysis of my commit graphs going back several years, if you're interested: https://mccormick.cx/gh/
My average leading up to 2023 was around 2k commits per year. In 2023 I started using ChatGPT and hit my highest commit count so far that year at 2,600. In 2024 I moved to a different country, which broke my productivity. I started using aider at the end of 2024, and in 2025 I again hit my highest commits ever at 2,900. This year is looking pretty solid.
From this it looks to me like I'm at least 1.4x more productive than before.
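A quick sanity check on that ratio, using only the commit counts quoted above:

```python
# Ratio of post-AI commit rate to the prior baseline, using the
# commenter's own figures (~2,000/year before, 2,900 in 2025).
baseline = 2_000
recent = 2_900
ratio = recent / baseline
print(round(ratio, 2))  # 1.45, i.e. "at least 1.4x"
```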
As a freelancer I have to track issues closed and hours pretty closely so I can give estimates and updates to clients. My baseline was always "two issues closed per working day". These are issues I create myself (full stack, self-managed freelancer) so the average granularity has stayed roughly constant.
This morning I closed 8 issues on a client project. I estimate I am averaging around 4 issues per working day these days. I know this because I have to actually close the issues each day. So on that metric my productivity has roughly doubled.
I believe those studies for sure. I think there is nuance to using these tools well, and I think a lot of people are going backwards and introducing more bugs than progress through vibe coding. I do not think I have gone backwards, and the metrics I have available seem to agree with that assessment.
Love your approach and that you actually have "before vs. after" numbers to back it up!
I personally also use AI in a similar way, strongly guiding it instead of vibe-coding. It reduces frustration because it surely "types" faster and better than me, including figuring out some syntax nuances.
But often I jump in and do some parts by myself. Either "starting" something (creating a directory, file, method etc.) to let the LLM fill in the "boring" parts, or "finishing" something by me filling in the "important" parts (like business logic etc.).
I think it's way easier to retain authorship and codebase understanding this way, and it's more fun as well (for me).
But in the industry right now there is a heavy push for "vibe coding".
Vibrations are surely an issue with electromechanical systems but hardly with electronics. There are plenty of cheap electronic accessories for cars and you can observe that those keep functioning for years.
Since .NET 10 still doesn't support Type Libraries, quite a few new Windows projects must be written in .NET Framework.
Microsoft sadly doesn't prioritize this, so that might remain the case for a couple more years.
One thing I credit MS for is that they make it very easy to use modern C# features in .NET Framework. You can easily write new Framework assemblies using a lot of C# 14 features. You can also add a few shim types and get most of it working (although not optimized by the CLR, e.g. Span<T>). For an example see this project: https://www.nuget.org/packages/PolySharp/
It's also easy to target multiple frameworks with the same code, so you can write libraries that work in both .NET and .NET Framework programs.
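Multi-targeting in the project file looks roughly like this (a sketch; the exact target monikers and the PolySharp package reference are illustrative):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Build the same sources for .NET Framework 4.8 and .NET 8 -->
    <TargetFrameworks>net48;net8.0</TargetFrameworks>
    <!-- Let the compiler use a modern language version on both targets -->
    <LangVersion>latest</LangVersion>
  </PropertyGroup>
  <ItemGroup>
    <!-- PolySharp source-generates polyfills so newer C# features compile on net48 -->
    <PackageReference Include="PolySharp" Version="1.*" PrivateAssets="all" />
  </ItemGroup>
</Project>
```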
Most likely it never will, because WinRT is the future and WinRT has replaced type libraries with .NET metadata. At least from Microsoft's point of view.
The current solution is to use the CLI tools just like C++.
However, have you looked into ComWrappers, introduced in .NET 5, with later improvements such as source-generated COM interop in .NET 8?
I still see VB 6 and Delphi as the best development experience for COM. In .NET it was never that great; there are entire books about doing COM in .NET.
That's not correct. You don't have to give your credit card details or even be logged in, but you are still required to have a Visual Studio license. For hobbyists and startups the VS Community license is enough, but larger companies need a VS Professional license even for the VS Build Tools.
How strict Microsoft is with enforcement of this license is another story.
You do not need a Professional or Enterprise license to use the Visual Studio Build Tools:
> Previously, if the application you were developing was not OSS, installing VSBT was permitted only if you had a valid Visual Studio license (e.g., Visual Studio Community or higher).
The license doesn't actually permit OSS development. Only compilation of near-unmodified third party OSS libraries.
You may not compile OSS software developed by your own organisation.
The OSS software must be unmodified, "except, and only to the extent, minor modifications are necessary so that the Open Source Dependencies can be compiled and built with the software."
Using VS Build Tools for open-source development is covered by the Community licence [0], separate from this Build Tools licence change. That licence is more open than you might expect; when working as an individual, it even permits proprietary development for commercial purposes.
Under that usage, the Community license counts as a valid Visual Studio license for Build Tools purposes, hence the second paragraph:
> This change expands user rights to the Build Tools and does not limit the existing Visual Studio Community license provisions around Open-Source development. If you already are a developer contributing to OSS projects, you can continue to use Visual Studio and Visual Studio Build Tools together for free, just like before.
That just confirms the parent comment's point. If you're just using the build tools directly, you're fine. If you need to develop "with Visual Studio", i.e. the IDE, not just the command-line tools, then you need the paid license.
It's actually not. It's complicated, but they're explicitly allowing Build Tools to be used to compile open source dependencies of closed source projects that do not need the MSVC toolchain for proprietary components.
It's why the example they give in the article is a Node.js application with native open source dependencies (e.g. sqlite3).
EDIT: it's clearer when read in context of the opening paragraph:
> Visual Studio Build Tools (VSBT) can now be used for compiling open-source C++ dependencies from source without requiring a Visual Studio license, even when you are working for an enterprise on a commercial or closed-source project.
I wish the post was clearer (though I'm not sure what that looks like). I've made the same mistake interpreting it, then had to go back and reread it a few times.
Well, let's say this were the worldview of all companies about open-source software. Then what happens? If people "tend to not give a crap" about licenses, all the nice guarantees of the GPL etc. also disappear.
The GPL was made in response to restrictive commercial licensing. Yes, it uses the same legal instrument (a license), but it was made in response!
So if proprietary licensing ceases to exist, then it's not a problem if the GPL also ceases to exist.
Also: it's quite obvious to me that IP law nowadays goes too far. It may have been a good idea at first, but now it's a monster (and people seem to die because of it: Aaron Swartz and Suchir Balaji come to mind).
There are zero guarantees, and commercial software uses GPL'd software as part of its products all the time. Licenses do not work, and you shouldn't respect them whenever you can get away with it.
But they also have over a billion in cash on hand. I imagine at that scale, with the customers being private individuals, the amount is pretty stable, and Starbucks can do whatever it wants with it, since it's extremely unlikely that all customers demand their money back at once.
I mean, once Starbucks has the money, customers either get it back as product (with a margin included) or leave it unspent forever (free money!)
I have a firm "no vouchers" rule because of this: the vouchers in my part of the world inexplicably "expire" if not used within a certain amount of time, cannot be redeemed for cash, and will not be honoured if the business goes belly up.
According to the laws here they have to. That doesn't mean they won't make it difficult. And it needs to be kept in a separate account and business entity (to avoid it being drawn into a bankruptcy). Not that this has ever stopped businesses from abusing it anyway. I suspect the voucher option isn't available in the Dutch app because of this, but I didn't bother to check.
This is typical CDU conservative talk from Merz. He is on a warpath against anything Angela Merkel did, because she saw him as politically inexperienced and shunned him, so he had to go work for BlackRock.
The CSU (the Bavarian equivalent and permanent coalition partner of the CDU) is also demanding to reactivate nuclear power plants but at the same time is not willing to store any spent nuclear fuel. The CSU is also notoriously anti renewables and does not want new power lines in their "beautiful scenery" to get the renewable power from northern Germany to Bavaria.
Hard to overstate just how expensive. Here in Montreal, where ice storms kill and cause billions in damage, we still don't bury the main transmission lines. We've been burying almost everything _in_ the city, where having to repair millions of individual connections (again) would be impractical, but it's relatively simple to repair the limited number of major lines into the city.
From CBC:
> Current estimates are that it would cost five to 10 times more to distribute electricity to a big city via underground cables, and that not all of nature's problems would be alleviated even if that were done.
> Horizontal directional drilling costs $10 to $30 (USD) per linear foot with upfront fixed costs of $30K or so.
Underground power lines are expensive, but not that expensive. As far as I know, you dig a ditch, put the power line into it, and then put the material back in over the top.
To pass under roads and rivers, and to avoid digging up tarmac, houses, orchards, crops, and other things on the surface.
Trenching is straightforward; I mention horizontal directional drilling because it puts a cap on the total cost of going underground vs. pylons and above-ground stringing.
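A back-of-envelope number from the figures quoted above ($10-$30 per linear foot plus ~$30K in fixed costs); these are the commenter's numbers, not an engineering estimate:

```python
# Rough horizontal-directional-drilling cost per mile, using the quoted
# $10-$30/linear foot variable cost and ~$30K upfront fixed cost.
FEET_PER_MILE = 5280

def hdd_cost_per_mile(per_foot, fixed=30_000):
    return fixed + per_foot * FEET_PER_MILE

low = hdd_cost_per_mile(10)   # $82,800 per mile at the low end
high = hdd_cost_per_mile(30)  # $188,400 per mile at the high end
```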
I don't know what you are doing, but my Arch Linux has been running since about 2013. I've needed to intervene a few times, I think 4 in total, but the base installation is from 2013, now nearly 13 years ago.
I share the same sentiment. I've had the same Arch install running since ~2016 and have been using Arch since about 2013, and the number of times I've needed to chroot from a live image is under 10, mostly related to systemd breaking things during an update, which is pretty much no longer an issue these days.
Compared to Windows-land, where nuking and reinstalling the entire OS is a routine maintenance task, checking the Arch news for any required manual intervention and running `pacman -Syu` is all I ever really think about.
I think this is a very interesting observation, because my experience has been pretty much the opposite. Disclaimer: I grew up with Windows.
Yet I've never had to reinstall Windows on any of my devices, ever. I've never had things behave in unusual or unpredictable ways.
Meanwhile, a highly recommended utility (on Reddit, SE/SO, and even a few distro forums) for touchpad gestures borked my GNOME setup. (Uninstalling it, as you might have guessed from my story and tone, did diddly squat.)
Just today I manually flushed my dnf package cache (or cleared it? Not sure of the terminology). In the past, I had to debug manually because the default timeout in Fedora was causing issues with a few hundred ms of internet latency. That was a fun rabbit hole: "Why can't I install an app that's only available via dnf install?" "Oh, because Fedora assumes you have good internet. But don't worry, if you have Ubuntu, it doesn't have these issues!"
...I've never even been made aware what download timeouts windows has. As it should be for a user.
I could go on and on. My Windows partition goes for months on sleep alone, typically only rebooting if I run out of battery or want to install an update. Linux... doesn't have hibernate yet. Fortunately it doesn't matter! ...Because some odd memory leak (and GPU driver stuff, perhaps?) forces me to shut down every so often. Oh well.
I'm not sure where you got the idea that Linux doesn't have hibernate: there's both the userspace systemd-hibernate path and the kernel's swsusp, and both work equally well (although you may need to make sure you have a large enough swap partition for it to function).
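On the swap caveat: a minimal sketch of the usual rule of thumb. The "swap >= RAM" threshold is a heuristic, not a hard kernel requirement (hibernation images are compressed and often fit in less):

```python
# Hibernation (suspend-to-disk) writes the RAM image to swap, so a common
# rule of thumb is SwapTotal >= MemTotal. Parses /proc/meminfo-style text.
def parse_meminfo(text):
    """Return {field: value_in_kB} from /proc/meminfo-formatted text."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            fields[key] = int(parts[0])
    return fields

def swap_big_enough_for_hibernate(meminfo):
    """Heuristic check: total swap covers total RAM."""
    return meminfo["SwapTotal"] >= meminfo["MemTotal"]

# Usage on a live Linux system:
# with open("/proc/meminfo") as f:
#     print(swap_big_enough_for_hibernate(parse_meminfo(f.read())))
```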
Also, the other issues you're describing do sound frustrating, but I think they're a byproduct of an entirely different culture. Exposing user-configurable timeouts, and you being made aware of them during troubleshooting, is something that enables you to deeply understand your system and how it's configured. In Windows, even if there are defaults for things like that, they're likely not exposed to the user or configurable at all. If the default settings are bad, you're just stuck with them; you aren't expected or intended to modify anything to better suit your needs.
My experience with Arch is mostly due to having been a fairly proficient Linux user prior to switching over and being very comfortable reading the wiki or bbs and tinkering to find solutions to things. A lot of my prior experiences with Debian or other "friendly" distros kind of put me through the wringer too, and I've found that the rolling release model of Arch fits my preferred workflow much better than Ubuntu, Debian, Fedora, or the other "batteries included" distros.
That's pretty good, I'm jealous! The last time I reinstalled my OS (Slackware) from scratch was 2009, but I run into serious problems every couple of years when upgrading it to 'Slackware64-current' pre-release, because Slackware's package manager doesn't track dependencies and you can just install stuff in the wrong order: I usually don't upgrade the whole OS at once... just have to fix any .so link errors (I've got a script to pull old libraries from btrfs snapshots). I've even ended up without a working libc more than once! When you can't run any program it sure is useful that you can upgrade everything aside from the kernel without rebooting!
Sometimes it's not about doing nothing, but about only being allowed to do the same stuff over and over again, because there is "no budget" to rewrite the codebase and automate the process.
I expected one person to step up, do the verification, and then F-Droid could use that signing key to distribute apps to phones with fascism mode enabled. They just need to pick an app ID that isn't already in use; it could even be sequential under org.fdroid.*
It's quite scary that no such idea is floated in the post. Apparently they're ready for F-Droid to be relegated to the realm of Google-free devices that nobody outside of a few hardcore privacy activists is currently willing to use. Maybe that'll change, but I doubt it'll change enough for governments to reconsider which OSes and third-party stores they need to support.