> At a pragmatic level, can't you say, hey here's something that's probably nothing, let's scan it again in 6 months
If a doctor even _hints_ there might be cancer, the patient will have a terrible 6 months (with actual, measurable negative health impacts from the added stress). Also, at some uncertainty level (say, a 10% chance of cancer) the doctor _has_ to say something and has to schedule expensive follow-ups to avoid liability, even though in 90% of cases it is not only unnecessary but actively harmful to the patient.
When, on average, the cost of the screening + the harm done by a false positive outweighs the benefits of an early detection, you shouldn't do the screening in the first place.
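That break-even condition is just an expected-value inequality. A toy sketch, with entirely made-up numbers (none of these are real clinical figures):

```javascript
// Toy expected-value model for a screening decision.
// All numbers below are illustrative, not real clinical data.
function netBenefitOfScreening({ prevalence, benefitIfDetected,
                                 falsePositiveRate, harmPerFalsePositive,
                                 costOfScreening }) {
  const expectedBenefit = prevalence * benefitIfDetected;
  const expectedHarm =
    (1 - prevalence) * falsePositiveRate * harmPerFalsePositive + costOfScreening;
  return expectedBenefit - expectedHarm; // screen only if this is positive
}

// Rare condition, noisy test: the expected harm can dominate.
const result = netBenefitOfScreening({
  prevalence: 0.001,        // 1 in 1000 actually has the disease
  benefitIfDetected: 1000,  // value of an early detection (arbitrary units)
  falsePositiveRate: 0.05,  // 5% of healthy patients get flagged
  harmPerFalsePositive: 50, // stress, follow-ups, unnecessary procedures
  costOfScreening: 1,
});
console.log(result); // negative here: screening does net harm on average
```

The point of the sketch is that a low prevalence multiplies the benefit term down while the false-positive harm term stays roughly constant, which is exactly the situation where population-wide screening stops paying off.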
At $enterprise, we were just looking for a proper term that sets "responsible vibing" apart from "YOLO vibe coding". We landed on "agent assisted coding".
It's a bit more technical. And it has a three-letter acronym. Gotta have a three-letter acronym.
Yes, please don't push "vibe engineering" to mean how you defined it in your blog post. To me, it means exactly the opposite.
I see "vibe" as pejorative. Adding "engineering" does not elevate it above "vibe coding", as I think is your intention in the post; it just shifts the "vibe" term to a different domain.
To me, "vibe engineering" means using an LLM to develop a "design" with no care for its validity, just as "vibe coding" means for "code".
"Agentic xyz" or "Agent assisted xyz" is more fitting.
FWIW, I do not see "vibe" as always pejorative; rather, it depends on goals. When quick results matter and long-term quality doesn't, "vibing" is a legit tactic.
Anyways, just my interpretations. Please, keep up the good work. Remember, the two hardest things in software are naming, cache invalidation and off-by-one errors. It's good you continue to tackle the zeroth one.
I really like "agent assisted coding". I think the word "vibe" is gonna always swing in a yolo direction, so having different words is helpful for differentiating fundamentally different applications of the same agentic coding tools.
I've used this in the past for collaborative diagramming sessions and love its ease and simplicity, but the point of Mermaid is its portability, i.e. it can be embedded in Markdown docs and viewed in various editors/platforms.
Thanks @maho! We're hoping to keep the improvements flowing. I'm non-technical but from my perspective I thought Mermaid sequence diagram functionality really shines! Would love to fill the gap in my knowledge. What is better about https://sequencediagram.org/ than Mermaid sequence diagrams?
It's mostly broadband noise that can be simulated by simpler methods, but visualizing possible resonance patterns for the low-frequency emissions from the compressor (which typically runs at 20 Hz, 40 Hz, ..., 120 Hz) would be good to know.
Although I am not sure how the 2D simulation result carries over to the 3D world...
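A cheap sanity check before any full simulation is the standard rectangular-room mode formula, f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²), to see whether any compressor harmonic lands near a room resonance. The room dimensions below are placeholders, and a real room is of course neither rectangular nor rigid-walled:

```javascript
// Mode frequencies of an idealized rigid-walled rectangular room.
// Room dimensions are assumed placeholders; c is the speed of sound in air.
const c = 343; // m/s
const [Lx, Ly, Lz] = [4.0, 3.0, 2.5]; // assumed room size in metres

function modeFrequency(nx, ny, nz) {
  return (c / 2) * Math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2);
}

// Enumerate low-order modes, then flag the nearest one to each
// compressor harmonic (20, 40, ..., 120 Hz).
const modes = [];
for (let nx = 0; nx <= 3; nx++)
  for (let ny = 0; ny <= 3; ny++)
    for (let nz = 0; nz <= 3; nz++)
      if (nx + ny + nz > 0) modes.push(modeFrequency(nx, ny, nz));

for (const f of [20, 40, 60, 80, 100, 120]) {
  const nearest = modes.reduce((a, b) => (Math.abs(b - f) < Math.abs(a - f) ? b : a));
  console.log(`${f} Hz: nearest room mode at ${nearest.toFixed(1)} Hz`);
}
```

For this 4 m room the first axial mode sits at c/(2·Lx) ≈ 42.9 Hz, uncomfortably close to the 40 Hz harmonic, which is the kind of coincidence the visualization would reveal.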
I really like nools, which is a drools clone, but for JavaScript. It's fantastic for quick hacks and for getting to know how to write code for rule engines.
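For a feel of what such engines do under the hood, here is a minimal forward-chaining loop. This is not nools' actual API (nools has a richer flow/rule DSL), just the core match-fire cycle that engines like nools and Drools elaborate on:

```javascript
// Minimal forward-chaining rule engine: repeatedly fire the first rule whose
// condition matches the working memory, until no rule fires.
function run(facts, rules) {
  let fired = true;
  while (fired) {
    fired = false;
    for (const rule of rules) {
      if (rule.when(facts)) {
        rule.then(facts);
        fired = true;
        break; // restart matching after each firing
      }
    }
  }
  return facts;
}

// Hypothetical ordering rules, purely for illustration.
const rules = [
  { name: "discount",
    when: (f) => f.total > 100 && !f.discounted,
    then: (f) => { f.total *= 0.9; f.discounted = true; } },
  { name: "free-shipping",
    when: (f) => f.discounted && f.shipping !== 0,
    then: (f) => { f.shipping = 0; } },
];

const result = run({ total: 120, shipping: 5 }, rules);
console.log(result); // { total: 108, shipping: 0, discounted: true }
```

Note how the second rule fires only because the first one changed the working memory; that chaining is the whole point, and what production engines optimize (e.g. with the Rete algorithm) instead of rescanning every rule.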
While Darwin can generate code for you, I think generating new projects from scratch is already being done by a lot of the major frameworks. Check out the native docs/tooling around the kind of stack you'd like to build!
My general answer: any tool that has a large online community. I’m not the dev, but any LLM-backed responses will naturally depend on that LLM’s familiarity with the tech. With that in mind, RubyOnRails and create-react-app (+ some node backend) seem like the natural winners.
This is insane. The project includes the hardware (GHz-capable RF generation and measurement), firmware (FPGA), and software (a C++ GUI). Surely that can't all be from one person?
It's been under development for close to 4 years. If the person has time (or is doing a Ph.D. or something on the subject), that's a very viable time frame to do all of this.
There is other people's work in it, but it's mainly a one-man show.
It's not that hard (not that it's easy either!). The individual parts are conceptually relatively simple; the devil is all in the details. For a generalist this is a doable, but likely very time-consuming, project. I've done something similar (fairly different focus in purpose and specs, but the overall shape and scope are not far off) professionally, mostly on my own.
Is there a way I can give feedback on wrong labels? The easy questions seem to be correct most (all?) of the time, but I noticed a few errors in the labelling of the complex question/answers. I would love to see this improve even further!
If memory serves correctly, Aldi Nord was one of the last big supermarket chains in Germany to introduce scanners at the registers (2003?), because their existing system was simply faster: Each item had a three-digit code, and all cashiers knew all codes by heart.
It was a race between me placing items on the conveyor belt and them ringing the items up. Oh the embarrassment when they told me the total as I was placing the last item on the conveyor belt.