Hacker News | unconed's comments

>Despite the quick spread of agentic coding, institutional inertia, affordability, and limitations in human neuroplasticity were barriers to universal adoption of the new technology.

Blaming lack of adoption purely on regressive factors follows the same frame that AI firms set. It isn't very effective satire for that reason.

It couldn't be that there is something essential and elementary wrong with the output, no... all these experienced experts are just troglodytes, and we should instead tag along with the people who offloaded the parts of their work they found tough to a machine the first chance they got.

There's no such thing as ape coding. There's still just coding, and vibe coding.


>He died in his sleep last month

Title is misleading.

>Wizardchan, a smaller and misogynistic forum for male virgins

I see the media has learned exactly nothing since the days any of this was relevant. Just apply the usual adjectives until consensus is achieved.

There's nothing more credible than sockpuppeting a dead person to renounce anything related, amirite?


> > He died in his sleep last month

> Title is misleading.

So enlighten us.


The person this thread is about publicly spoke out against and criticized Watkins, 8chan, and other forums that promote hate speech. Multiple times, in newspapers and in documentaries about the topic(s).

He went into business with the wrong people, and they pursued him for years afterward: bogus lawsuits, showing up at his doorstep to intimidate him, threats, etc.

Personally, I wanted to say that he was a person who strongly believed in the idea of open debate and cultural exchange. So much so that he saw too late what the Watkins family and their qanon/pol/whatever movement were planning to do with his boards.


The news article isn't an obituary, it's gravedancing. Hope that enlightened your royal majesty.

>Before you get your pitchforks out and call me an AI luddite, I use LLMs pretty extensively for work.

Chicken.

Seriously, the degree to which supposed engineering professionals have jumped on a tool that lets them outsource their work and their thinking to a bot astounds me. Have they no shame?


No, we truly live in a post-shame society and that's definitely not a good thing. Shame is (or was) an important tool for enforcing social norms, and the acceptance of AI slop (both writing and code) is only the latest case where a sufficiently large percentage of people think anything goes, to the point that it feels literally pointless to speak up at times.


Sorry, but this post is the blind leading the blind, pun intended. Allow me to explain; I have a DSP degree.

The reason the filters used in the post are easily reversible is because none of them are binomial (i.e. the discrete equivalent of a gaussian blur). A binomial blur uses the coefficients of a row of Pascal's triangle, and thus is what you get when you repeatedly average each pixel with its neighbor (in 1D).

When you do, the information at the Nyquist frequency is removed entirely, because a signal of the form "-1, +1, -1, +1, ..." ends up blurred _exactly_ into "0, 0, 0, 0...".
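A minimal sketch of that fact, assuming numpy (the exact signal length and kernel normalization here are my own choices for illustration):

```python
import numpy as np

# Alternating signal at the Nyquist frequency: -1, +1, -1, +1, ...
x = np.array([(-1.0) ** n for n in range(16)])

# Binomial kernel [1, 2, 1] / 4: a row of Pascal's triangle,
# i.e. two successive 2-sample neighbor averages.
kernel = np.array([1.0, 2.0, 1.0]) / 4.0

# 'valid' mode keeps only fully-overlapped outputs, so boundary
# effects don't cloud the result: the interior is exactly zero.
y = np.convolve(x, kernel, mode="valid")
print(np.max(np.abs(y)))  # 0.0 — the Nyquist component is gone entirely
```

Every output sample is (±1 ∓ 2 ± 1)/4 = 0, so no amount of sharpening can bring that component back.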

All the other blur filters, in particular the moving average, are just poorly conceived. They filter out the middle frequencies the most, not the highest ones. It's equivalent to doing a bandpass filter and then subtracting that from the original image.

Here's an interactive notebook that explains this in the context of time series. One important point is that the "look" that people associate with "scientific data series" is actually an artifact of moving averages. If a proper filter is used, the blurriness of the signal is evident. https://observablehq.com/d/a51954c61a72e1ef


"In today’s article, we’ll build a rudimentary blur algorithm and then pick it apart."

Emphasis mine. Quote from the beginning of the article.

This isn't meant to be a textbook about blurring algorithms. It was supposed to be a demonstration of how what may seem destroyed to a casual viewer is recoverable by a simple process, intended to give the viewer some intuition that maybe blurring isn't such a good information destroyer after all.

Your post kind of comes off like criticizing someone for showing how easy it is to crack a Caesar cipher for not using AES-256. But the whole point was to be accessible, and to introduce the idea that just because it looks unreadable doesn't mean it's not very easy to recover. No, it's not a mistake to be using the Caesar cipher for the initial introduction. Or a dead-simple one-dimensional blurring algorithm.


Using a Caesar cipher as an intro without explaining the pro tool and framing the educational context properly is just shit pedagogy, bro.

Go look up what a z-transform is, and begone.


Oh, I see. You're just an asshole.

My apologies for extending you the benefit of the doubt and distressing you thereby.


If you have an endless pattern of ..., -1, 1, -1, 1, -1, 1, ... and run box blur with a window of 2 or 4, you get ..., 0, 0, 0, 0, 0, 0, ... too.

Other than that, you're not wrong about theoretical Gaussian filters with infinite windows over infinite data, but this has little to do with the scenario in the article. That's about the information that leaks when you have a finite window with a discrete step and start at a well-defined boundary.
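A quick sketch of that boundary leak, assuming numpy (the padding convention and signal here are my own illustrative choices): with a finite 2-sample box blur and a known zero boundary, the original can be recovered exactly by marching inward from the edge.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.integers(0, 256, size=32).astype(float)  # original "pixels"

# 2-sample box blur with a well-defined boundary: the first output
# averages x[0] with an assumed zero sample before the signal starts.
padded = np.concatenate([[0.0], x])
y = (padded[:-1] + padded[1:]) / 2.0  # y[i] = (x[i-1] + x[i]) / 2

# Undo it by marching from the known boundary: x[i] = 2*y[i] - x[i-1].
recovered = np.empty_like(y)
prev = 0.0  # the known boundary value
for i, yi in enumerate(y):
    recovered[i] = 2.0 * yi - prev
    prev = recovered[i]

print(np.allclose(recovered, x))  # True — nothing was destroyed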


A binomial filter is exactly equal to a repeated 2-sample box blur, yes. That's exactly how you construct Pascal's triangle.

For filter sizes > 2, box blurs are ass.
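A one-liner sketch of that construction, assuming numpy: repeated convolution with the 2-sample box [1, 1] generates the rows of Pascal's triangle.

```python
import numpy as np

# Each convolution with [1, 1] is one more 2-sample neighbor average;
# the coefficients that accumulate are the binomial coefficients.
kernel = np.array([1, 1])
row = np.array([1])
for _ in range(4):
    row = np.convolve(row, kernel)
    print(row)
# [1 1] -> [1 2 1] -> [1 3 3 1] -> [1 4 6 4 1]
```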


Interesting...I've used moving averages not thinking too hard about the underlying implications. Do you recommend any particular book or resource on DSP basics for the average programmer?


> Sorry but this post is the blind leading the blind, pun intended. Allow me to explain, I have a DSP degree.

FWIW, this does not read as constructive.


It also makes no sense to me, and I also have a DSP degree. Of course moving averages (aka box blurs) filter out higher frequencies more than middle frequencies.


Homework assignment: make a Bode plot of the convolution filters [1 1 1] vs [1 2 1].

Which one turns +1, -1, +1, -1, .. into all zeroes?

You ought to know this because the Fourier transform of [1 0 1] is a cosine of amplitude 2 on the complex unit circle e^(i*omega), which means the DC coefficient needs to be 2 for the zeroes to end up at Nyquist.

The frequency response H(z) (= H(e^(i*omega))) of [1 1 1], on the other hand, has its minimum somewhere in the middle.
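A sketch of the homework answer, assuming numpy (the sample frequencies and normalizations are my own choices): evaluate |H(e^jw)| of the two kernels directly from the definition.

```python
import numpy as np

def mag_response(h, w):
    """|H(e^jw)| of an FIR filter h at angular frequencies w."""
    n = np.arange(len(h))
    return np.abs(np.exp(-1j * np.outer(w, n)) @ h)

box = np.array([1.0, 1.0, 1.0]) / 3.0    # moving average [1 1 1]
binom = np.array([1.0, 2.0, 1.0]) / 4.0  # binomial [1 2 1]

nyquist = np.array([np.pi])
midband = np.array([2 * np.pi / 3])

print(mag_response(box, nyquist))    # ~0.333: the box lets Nyquist through
print(mag_response(binom, nyquist))  # ~0.0: the binomial kills it exactly
print(mag_response(box, midband))    # ~0.0: the box's null sits mid-band instead
```

So it's [1 2 1] that zeroes out the alternating signal; [1 1 1] instead puts its null at 2π/3 and passes a third of the Nyquist component.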

Also here's a post that will teach you how to sight read the frequency response of symmetric FIR filters off the coefficients: https://acko.net/blog/stable-fiddusion/


The degree to which people defend poor scholarship and writing on HN these days is frankly pathetic.

There is nothing about that intro that is offensive. Reading comprehension ought to tell you that "pun intended" is a joke to make the bitter pill that OP wrote garbage easier to swallow.


It's bizarre how casually some people hate on Musk. Are people still not over him buying Twitter and firing all the dead weight?

_Especially_ because emotional safety is what Twitter used to be about before they unfucked the moderation.


> Are people still not over him buying Twitter and firing all the dead weight?

You think that's really the issue? Or are you not making a good faith comment yourself?

I cannot remember the last time I saw someone hating on Elon for his Twitter personnel decisions. The vast majority of the time it is the Nazi salutes he did on live TV, and secondary to that, his inflammatory behavior online (e.g. calling the submarine guy a pedo).


I still pick on it, but I was never a big Twitter user, I just enjoy calling it Xitter. Picking on Elon Musk is for the shitty things he's been doing to our government and the world, and for being a bad person in general.


cough

The Strange Case of "Engineers" Who Use AI

I rely on AI coding tools. I don’t need to think about it to know they’re great. I have instincts which tell me convenience = dopamine = joy.

I tested ChatGPT in 2022, and asked it to write something. It (obviously) got some things wrong; I don’t remember what exactly, but it was definitely wrong. That was three years ago and I've forgotten that lesson. Why wouldn't I? I've been offloading all sorts of meaningful cognitive processes to AI tools since then.

I use Claude Code now. I finished a project last week that would’ve taken me a month. My senior coworker took one look at it and found 3 major flaws. QA gave it a try and discovered bugs, missing features, and one case of catastrophic data loss. I call that “nitpicking.” They say I don’t understand the engineering mindset or the sense of responsibility over what we build. (I told them it produces identical results and they said I'm just admitting I can't tell the difference between skill and scam).

“The code people write is always unfinished,” I always say. Unlike AI code, which is full of boilerplate, adjusted to satisfy the next whim even faster, and generated by the pound.

I never look at Stack Overflow anymore; it's dead. Instead I want the info to be remixed and scrubbed of all its salient details, and to have an AI hallucinate the blanks. That way I can say that "I built this" without feeling like a fraud or a faker. The distinction is clear (well, at least in my head).

Will I ever be good enough to code by myself again? No. When a machine showed up that told me flattering lies while sounding like a Silicon Valley board room after a pile of cocaine, I jumped in without a parachute [rocket emoji].

I also personally started to look down on anyone who didn't do the same, for threatening my sense of competence.


1) A system that needs _seconds per tile_ is not suitable for real-time anything imo.

The irony is that you explicitly positioned your thing as a successor to Perlin noise when in fact it's just a system that hallucinates detail on top of Perlin (feature) noise. This is dishonest paper bait and the kind of AI hubris that will piss off veterans in the scene.

2) I'm also disappointed that nowhere is there any mention of Rune Johansen's LayerProcGen, which is the pre-AI tech that is the real precedent here.

Every time I see a paper from someone trying to apply AI to classic graphics tech, it seems they haven't done the proper literature study and just cite other AI papers. It seems they also haven't talked to anyone who knows the literature either. https://runevision.com/tech/layerprocgen/

3) >The top level input is perlin noise because it is genuinely the best tool for generating terrain at continental scale

This is a nonsense statement. I don't know what you are thinking here at all, except maybe that you are mistakenly using "Perlin" as a group noun for an entire class of noise functions.

Perlin has all sorts of well-known issues, from the overall "sameyness" (due to the mandatory zero-crossings and consistent grid size) to the vertical symmetry, which fails to mimic erosion. Using it as the input to a feature vector isn't going to change that at all.
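To make the zero-crossing point concrete, here's a minimal 1D gradient-noise sketch, assuming numpy (the function name and gradient setup are mine, not from any particular library): gradient noise of this kind is exactly zero at every integer lattice point, which is where the fixed-grid "sameyness" comes from.

```python
import numpy as np

def gradient_noise_1d(x, gradients):
    """Minimal 1D gradient ("Perlin-style") noise over an integer lattice."""
    i = np.floor(x).astype(int)
    f = x - i
    fade = f * f * f * (f * (f * 6 - 15) + 10)  # Perlin's quintic fade curve
    left = gradients[i] * f              # ramp from the left lattice point
    right = gradients[i + 1] * (f - 1)   # ramp from the right lattice point
    return left + fade * (right - left)  # smooth interpolation between them

rng = np.random.default_rng(0)
g = rng.uniform(-1, 1, size=17)  # one random gradient per lattice point

# At every integer lattice point the noise is exactly zero: mandatory
# zero-crossings at a fixed grid spacing, no matter the gradients.
xs = np.arange(16, dtype=float)
print(np.max(np.abs(gradient_noise_1d(xs, g))))  # 0.0
```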

The idea of using plate tectonics is much better, but vastly _different_ from what you have done. And btw, every plate tectonics simulation that I've seen fails to look convincing. If you treat it as a simple transport problem, the result just looks like a Civilization 1 map. But if you want to treat it seriously, then the tectonics have to be the source of all your elevation changes, not just some AI hallucination added on top afterwards. The features would all have to make sense.

Your abstract states that classic terrains are "fundamentally limited in coherence"... but even to my non-geologist eye, your generated heightmaps look incredibly blobby and uncanny. This makes me think a real geologist would immediately spot all sorts of things that don't make sense. For example, if you added water and rivers to the terrain, would it work, or would you end up with nonsensical loops and Escher-like watersheds?

(mostly I'm disappointed that the level of expertise in AI tech is so low that all these things have to be pointed out instead of being things you already knew)


> And btw, every plate tectonics simulation that I've seen does not look convincing.

It's an amazing problem! I haven't spent much time on it - maybe 20-30 hours spread out over several years - but it _is_ something I come back to from time to time. And it usually ends up with me sitting there, staring at my laptop screen, thinking, "but what if I... no, crap. Or if we... well... no..."

TBH it's one of the things that excites me, because it makes it clear how far we still have to go in terms of figuring out these planet-scale physical processes, simulating them, deriving any meaningful conclusions, etc. Still so much to learn!


The solution to seeing more Bret Victor-ish tooling is for people to rediscover how to build the kind of apps that were commonplace on the desktop but which have become a very rare art in the cloud era.

Direct manipulation of objects in a shared workspace, instant undo/redo, trivial batch editing, easy duplication and backup... all things you can't do with your average SaaS, and which most developers would revolt over if they had to do their own work without them.


Recently I tried to hack a feature into Transmission for Mac. All I wanted to do was add a single checkbox per torrent, corresponding to a property in the libtransmission back-end that isn't exposed.

And sorry, but it was a complete mess from start to finish. Instead of simply mapping a boolean value to a state, the entire read and write path was an elaborate game of telephone. In React I would just use something like a cursor to traverse and mutate state immutably, and the rendering part would take care of itself. There was also a bunch of extra code to remember and apply defaults, which in a more functional system like React is generally handled via composition.

One of the article's claims is that the React model is suboptimal because UIs are more stable than it assumes. But this isn't true, because the edge cases are what you end up spending the most dev time on.

A declarative approach lets you achieve N features in roughly O(N) lines of code. When you do things imperatively, you instead have to orchestrate up to O(N^2) state transitions in O(N^2) lines of code.

The React model is also not that different from immediate mode, which is very popular in games, where performance is important. The main difference is that React has an answer to what happens when you can't fit all the work into one rendering cycle, via memoization and sparse updates.

This gets you similar perf to classic retained mode, but without all the tedious MVC plumbing.
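A toy sketch of that memoize-and-diff idea in Python (all names here are hypothetical, not any real framework's API): re-describe the whole UI from state each frame, immediate-mode style, then diff against the previous description so only actual changes reach the retained side.

```python
def render(state):
    # Declarative: the full UI as plain data, rebuilt every frame, O(N) lines.
    return {
        "title": f"Todos ({len(state['todos'])})",
        "items": tuple(state["todos"]),
        "filter": state["filter"],
    }

def diff(old, new):
    # Sparse update: only keys whose value changed get re-applied,
    # which is what keeps the rebuild-everything model fast.
    return {k: v for k, v in new.items() if old.get(k) != v}

prev = render({"todos": ["milk"], "filter": "all"})
curr = render({"todos": ["milk", "eggs"], "filter": "all"})
print(diff(prev, curr))  # only 'title' and 'items' changed, not 'filter'
```

The render function never mutates anything; the diff is the only place where "what changed" is computed, which is the part classic MVC makes you hand-write per widget.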

PS: Here's how I use patching as a basis for state management: https://usegpu.live/docs/reference-live-@use-gpu-state


Changes in mood result from changes in behavior.

If you don't feel like doing something you know is good for you, do it anyway. You'll feel better afterwards.

