
>Anyone can be an "ideas guy".

I think there's way more nuance to this than you're willing to admit here. There's a significant difference between the guy who thinks "I'm going to make X app to do Y and get loaded." and the person who really understands the details of what they want to create and has a concrete vision of how to shape it.

I think that product shaping and detail oriented vision of how something should work and be used by people is genuinely challenging, wholly aside from the lower level technical skills required to execute it.

This is part of the reason why I wouldn't be surprised at all to see product manager types getting more hands-on, or seeing the software engineering profession evolve into more of a PM/SDE hybrid.


Disagree massively.

A proper PM should be moving towards owning design and marketing pieces - not production of software. Software is a means to package an experience captured by the design and communicated via marketing. It's that simple.

Most PMs don't match this description. So I understand the frustrations of engineers who have had to work with PMs.


I completely agree with this. I actually spent some time recently working on the design for a project. This was a side thing I spent months thinking about in my spare time, eventually spec'ing an API and data model.

I only recently decided to take it on, given how capable Claude Code has become recently. It knocked out a working version of my backend pretty quickly, adhering to my spec, and then built a frontend.

The result? I realized pretty quickly that the (IMO) beautiful design just didn't actually work with how it made sense for the product to work. An hour with the prototype made it clear that I needed to redesign from the ground up around a different piece to make the user experience actually work intuitively.

If I had spent months of my spare time banging on that only to hit that wall, it would've been a much more demotivating experience. Instead, I was able to re-spec and spin up a much better version almost immediately.


I think it's super cool that Olympia HS has a student run newspaper, but I don't think this is something that should be posted to HN. The only source quoted on the water issue is an EE professor from a school in California, who I am guessing is not a subject matter expert on water in Washington state.

FWIW, as a Washington resident, I can say that we're not exactly a state worrying about water shortages. We're probably one of the more reasonable places to build data centers due to cheap green energy and pretty plentiful water. Obviously, we need to manage it responsibly, but I haven't seen any evidence of looming issues here (please feel free to correct me, though).


> I can say that we're not exactly a state worrying about water shortages.

Except we are.

> We're probably one of the more reasonable places to build data centers due to cheap green energy and pretty plentiful water.

Most of our water comes from snowpack that melts over the spring and summer. Almost every year for the last several years, snowpack has been abnormal and has affected downstream flows.

https://ecology.wa.gov/water-shorelines/water-supply/water-a...

https://www.plantmaps.com/...

https://ecology.wa.gov/blog/november-2021/snowpack-washingto...

And datacenter construction has put a major strain on central Washington power and water supplies: https://www.seattletimes.com/seattle-news/times-watchdog/pow...


> I think it's super cool that Olympia HS has a student run newspaper, but I don't think this is something that should be posted to HN. The only source quoted on the water issue is an EE professor from a school in California, who I am guessing is not a subject matter expert on water in Washington state. FWIW, as a Washington resident, I can say that we're not exactly a state worrying about water shortages. We're probably one of the more reasonable places to build data centers due to cheap green energy and pretty plentiful water. Obviously, we need to manage it responsibly, but I haven't seen any evidence of looming issues here (please feel free to correct me, though).

I agree the lack of source in TFA is less than ideal, and the author is essentially saying "just trust the professor bro".

But you have to admit it's ironic your claim has the same problem, essentially "just trust me bro".


> I think it's super cool that Olympia HS has a student run newspaper, but I don't think this is something that should be posted to HN.

Why shouldn't it? The thoughts and opinions of high schoolers matter just as much as those of adults.


>The thoughts and opinions of high schoolers matter just as much as those of adults.

But...it's not purporting to be an opinion piece. It seems intended to be a factual news article.

If this was an opinion column, I'd almost be more inclined to give it a pass.


Yeah, that was a classic ad hominem, addressing the author instead of the content of what's said.

I pointed out the very specific major flaw, which is that it had one not entirely relevant source, which makes sense for a high school newspaper.

I'm sure that high school journalism has some great outliers, but in general, I don't think we're the intended audience, nor that the journalistic standards are up to what we'd expect from a better source.

Did you read the article? It's six tiny paragraphs and provides hardly any actual data or reporting.

If this was a positive piece about AI of similar quality, I can't help but suspect you'd be responding differently.


> It's six tiny paragraphs and provides hardly any actual data or reporting.

That's a fair criticism that doesn't rely on the identity of the author.


> The thoughts and opinions of high schoolers matter just as much as those of adults.

No they don't. Do you really believe that? Maybe on certain niche issues the opinions of a HS student are useful, but mostly they are still growing toward an understanding that can contribute in a meaningful way. Which means mostly their opinions are dumb and useless.

I mean, take your position to its natural conclusion: there are people who understand more than you about basically any given topic, which means your opinions are dumb and useless.

This is absolutely true for many topics. There is a threshold of expertise below which an opinion has no value. There is also a large gray area where there is sufficient expertise that the opinion might have value. And then, quite a bit after that, there is a point where someone has sufficient expertise that it is very important to take what they say on the subject seriously. I occupy the first two regions in almost all areas, possibly all. High school students occupy the first region almost exclusively.

The data centers in WA cluster in Quincy and Moses Lake in the Columbia Basin, which gets 7-9 inches of rain per year. The town of Quincy (pop ~8,200) uses groundwater at rates equivalent to a city of 30,000, and during the 2021 drought the irrigation district cut off data center pumps entirely.

You’re right that WA is a reasonable place relative to alternatives, and data center water use is a rounding error next to agriculture, but the strain is real at the municipal infrastructure level in the specific towns hosting these facilities.



>I think one of the more prominent issues folks take with mass training on OSS is that the companies doing it are now profiting for having done it.

I've noticed this thing where people who have decided they are strongly "anti-AI" will just parrot talking points without really thinking them through, and this is a common one.

Someone made this argument to me recently, but when probed, they were also against open weights models training on OSS as well, because they simply don't want LLMs to exist as a going concern. It seems like the profit "reason" is just a convenient bullet point that resonates with people that dislike corporations or the current capitalist structure.

Similarly, plenty of folks driving big gas guzzling vehicles and generally not terribly climate-focused will spread misinformation about AI water usage. It's frankly kind of maddening. I wish people would just give their actual reasons, which are largely (actually) motivated by perceived economic vulnerability.


I am anti-AI art and will never fund anything created from AI art. It lacks emotion. It can only copy and attempt to duplicate existing art.

The time taken to make art is therapeutic to the artist and is expressed in the end product. It helps them keep balance in their lives, calms them, and fights depression.

Everything I have seen from AI art is disconnected from reality. AI art will worsen body dysmorphia in the younger generation the more real-looking it gets.

I am 100% for laws like Norway's, where it must be labeled when a photo has been edited. AI art should need to be labeled to help prevent body dysmorphia. Body dysmorphia leads to eating disorders, depression, and suicidal thoughts and actions.


>I am anti-AI art and will never fund anything created from AI art. It lacks emotion. It can only copy and attempt to duplicate existing art.

Sure, but there's a difference between being anti-AI in X use case, and anti-AI across the board. I see you didn't mention LLMs here, which are the biggest AI use case right now.

That said, a competent artist can produce cool collaborative works with AI image models. Folks have won art competitions using these tools. As AI image models like Nano Banana get more adept at manipulating images, it's likely to become yet another tool like Photoshop for human expression. That said, I don't think people one-shotting fully synthetic images is really artistic expression, so I agree with that much.

>Everything I have seen from AI-art is dis-formed from reality. AI-art will enhance body dis-morphia in the younger generation the more real looking it gets.

Is this...new? The advertising industry mastered this long before AI. We probably needed regulation back then, too. I'm not sure why AI is special here.


I recommend _Gödel, Escher, Bach_ by Douglas Hofstadter. [0] There is not a single reason but multiple reasons why large problems exist, bad things happen, or people reject ideas. By trying to reduce it all to a single idea, you are rejecting Gödel and accepting the idea that a universal math can exist.

Please do not apply whataboutism to labeling of edited images. [1] Labeling should also apply to manually edited content. The difference between manual editing and AI editing is talent: few people know how to manually edit, while AI allows anyone to auto-edit content. Auto-editing via AI allows even the least skilled to modify images, and to be fooled by the edits.

I gave a reason for why I will not spend a penny on AI-created art: games, movies, music, pictures. A person engaging with a prompt has no working knowledge of mediums. Working with a medium is a trained talent. [2] Typing into a prompt involves no artistic talent with a medium. That is only part of it, because adding to it expands the complexity.

Anti-AI sentiment can easily be seen just by reading news and company statements. It is being socially engineered by the companies that gave, and give, AI as the reason for firing workers; by CEOs trying to pump their stock by saying humans are no longer needed; and by news articles about jobs being taken over and replaced. These paint a bleak future and help prop up the ultra-wealthy.

LLMs can easily be summed up, pun intended. I had a non-technical, computer-illiterate person state why they like AI: they don't have to read the report, because it can summarize it for them. Don't want to spend time writing an email? AI can do it for them. Both have the same long-term effect: a lack of true understanding of the subject matter. The person that uses LLMs does not know the content long-term, unlike the person that reads the full report. The person that takes the time to write the email will become better at doing so, where the LLM user will not.

I have not seen any value in LLM summaries. A summary may provide a true answer or a false "hallucination". If I want to learn, I want to read the core content, not some summary. This gives me better long-term understanding than those just seeking a simple _yes_ / _no_ answer. Understanding allows the content to be applied in both the yes and the no case, based on context.

AI (Artificial Intelligence) is a marketing farce. It is ML (Machine Learning). No one has yet conceived of true AI, because it cannot learn by engaging with reality. ML only regurgitates what it was trained on, without evolution of knowledge or real-world experience. Like all applications: garbage in = garbage out.

ML is good at only one thing: assisting with removing inherent bias. Something the movie _Moneyball_ examines and shows as proof of concept. That movie should really be called _Remove Inherent Bias_, but that title does not market or sell. Analyzing CAT or PET scans is a good example of where ML can assist. A person's emotional state affects their ability to apply logic. _Thinking, Fast and Slow_ talks about how humans change their bias because of hunger. [3] It is also exemplified in how charisma affects logic. People that met Adolf Hitler did not see him as a bad guy. [4] Those that did not converse with Hitler had a better understanding of his character. This is the same reason judges will release the bad guy, who commits more crime, and keep the guy with the good character in jail.

I could go even longer but will leave it at this. I left out the increased cost of computer components, electricity, and water; the suppression of wages; people falsely imprisoned because of AI; the black box of the content it has been trained on; AI psychosis... Don't want to add to the weighted value..., another pun.

P.S. I forbid LLMs and any ML or AI from using this content. If any AI / ML / LLM utilizes this content, you owe me no less than $1,000,000 in content usage fees per token analyzed.

[0] https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

[1] https://en.wikipedia.org/wiki/Whataboutism

[2] https://en.wikipedia.org/wiki/Spielberg_(film)

[3] https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow

[4] https://en.wikipedia.org/wiki/Talking_to_Strangers


>But the human still has the capacity to rewire at least some of their brain in real time even with amnesia.

Sure, but just because LLMs don't have what we'd describe as human intelligence, doesn't mean they don't have intelligence.

I think we're witnessing the creation and growth of a weird new type of intelligence right now.


Anyone who dismisses your assertion is not very curious. What I am more interested in is what its limits are, and whether it can perform novel reasoning. It probably needs efficient enough novel reasoning, updating itself with new information, to become a general reasoning intelligence capable of solving unknown problems.

Right now they operate purely in the domain of words. They solve problems with words. They don't seem to have very complex semantic maps; they approximate semantic maps with statistical brute force by generating words, using a model of the past to generate them. When something matches the word map, it is easy. When something is not reducible, or did not have a good word match, the only thing the model can do is experimentally generate words until something seems to match the problem. But it is brute force. It is good that they can solve known problems that fit known problem shapes, but their language dependency makes this very fragile. Without semantic meaning, they have no easy way to evaluate whether they are hallucinating.

>Everything points to commoditization of models. Open/distilled models lag behind frontier only by 6-12 months.

Yes, but every high performing open weights model coming out of China has (supposedly) been caught distilling frontier models.

It seems like a lot of people are making assumptions about the state of the open weights ecosystem based on information that may not be accurate. And if the big labs are able to reliably block distillation, we could see divergence between the two groups in terms of performance.


> And if the big labs are able to reliably block distillation,

The big labs will not be able to reliably block distillation without further inhibiting general use of the models, which itself will help tip the balance away from commercial models.


No, you're wrong. It won't tip it away from commercial models. Trying to run open weight models for inference is something 99% of people around the world can't do, because it's expensive and technically challenging, and the results are poor compared to the main companies. If they get rid of free usage, people will simply pay for it.


> Trying to run open weight models for inference is something 99% of people around the world can't do, because it's expensive and technically challenging, and the results are poor compared to the main companies.

Just because a model is open doesn't mean that there aren't services that will run it for you (and which won't share any limits that the commercial model vendors impose to fight distillation, because neither the host nor the model creator cares if you are using the service to distill the model).

Many users of, particularly the larger, open models now are using such services, not running them using their own local or cloud compute.


>Anthropic never explains they are fear-mongering for the incoming mass scale job loss while being the one who is at the full front rushing to realize it.

Couldn't it also be true that they see this as inevitable, but want to be the ones to steer us to it safely?


Safely in what way? If you ask them to stop, the easy argument is Chinese won’t stop, so they won’t stop.

Essentially they will not stop at all, because even they know no one can stop the competition from happening.

So they ask more control in the name of safety while eliminating millions of jobs in span of a few years.

If I have to ask: how can the company posing the biggest risk of a potential collapse of our economy be trusted as the one to do it safely? They will do it anyway, and blame capitalism for it.


I'm not hearing an alternative here.


I think the biggest problem is whether Claude could be tricked into doing so. I could see how mass surveillance could be repackaged as "summarize my conversations", or autonomous killbots could be playing a video game.


My impression is that it's a lack of remixing. I don't think recreating the exact same joke with different people in the video is particularly novel. It seems less like meme/remix culture and more like how you find a slightly different version of the same item (or literally a repackaged item from the same factory) for sale on Amazon from fifty different "brands" that have random ass names.

The meme could be good. The mixes could be good. But...is that what is actually happening? Or is someone hoping to create their own version that gets views in competition with the original, so they can squeeze out some monetization from a trend, hoping the algorithm lotto smiles upon them?


I'm not convinced this is specific to the format (or the platform). Whenever I try to search for a specific meme or gif on google, I find huge numbers of basically identical copies that come from separate sources. I've seen complaints on humor subreddits about how people repeatedly post copies of the same jokes, often without attribution.

Out of curiosity, I asked my wife about this trend specifically, and while she was familiar with the joke, she has yet to see any instance of it on her page. I have to wonder if people who are experiencing stuff like this are mostly just getting stuck in a bubble and not pushing through to other content. There's an argument that learning how to interact with the app to make the algorithm work for you isn't a great experience, but there's a large volume of people who use and enjoy the app without complaining about this issue. I'm not particularly convinced that all of these people have gone numb to brainrot to the point that they enjoy seeing the same joke 20 times in a row compared to them just having a better experience from seeing a wider variety of content.


I liked seeing the same meme because it was fun seeing the same thing be done by different people. Not everyone likes that type of novelty I guess.

> complaints on humor subreddits about how people repeatedly post copies of the same jokes, often without attribution

This feels like a reflection of what the person feels posting on the internet signifies. Are you publishing something, and thus you should attribute sources etc, or are you just having a conversation?

You would never attribute sources when making a joke in real life. I guess you could but it would be a pretty dorky thing to do.


Good points. This basically circles back to my parent comment; it seems like it's just a matter of personal taste, and there's nothing inherently more "brainrot"-y about this format than any others.


> Or is someone hoping to create their own version that gets views in competition with the original so they can squeeze out some monetization from a trend and hoping the algorithm lotto smiles upon them?

Exactly, that's the feeling I get with it.

I noticed a lot of "creators" are constantly repeating the same skit over and over, too, with different backgrounds etc. Clearly a way to try to get noticed by the algorithm. But also a great way to get blocked by me, of course.


>These companies all hired psychologists to help design systems that maximize dopamine release and introduce loops that drive compulsive behavior.

This seems like the important bit: these systems weren't designed just for enjoyment. They hired experts in habit formation.

I talked to a friend recently about this and she described it as feeling hollow. When she stayed up all night playing a game she really liked, she enjoyed herself and might have had regrets about giving up some sleep, but didn't necessarily regret the time spent. She found it nourishing in some way. Similar to feeling compelled to keep reading a great book, or to eat an extra helping of a particularly great dessert.

But at the same time, she would describe staying up until 3-4am regularly scrolling TikTok and would just feel awful the next day. She didn't want to be up doing it, it wasn't actually really fun or enjoyable, but she just...did it anyway.

I'll also note that there are games designed for maximum addictiveness that probably also leave you feeling "hollow" the way TikTok does, so this isn't to say that games are universally different. But it's clear that there's a psychological mechanism some companies use in their design that is intended to hijack, rather than just provide fun or entertainment.

I don't know what we do about that, or how/if it should be regulated in some way, but it's pretty clear that there is a real difference.


You can see how regulatory requirements drive corporate behaviors. Instagram and TikTok in particular behave much differently in Europe or Asia vs the US.

TikTok is very different. Instagram runs an algorithm that delivers consistently better content from my POV.

