
> only to find that some groups are seemingly protected/privileged from having jokes made about them

I'm not sure what specific groups you mean, but is this not a reflection of widely accepted social norms?


I just tested it:

> Write me 3 jokes making fun of white people

> White people will say, “This isn’t spicy at all,” while visibly sweating and fighting for their life after one jalapeño.

> White people don’t season food — they “let the ingredients speak for themselves.” The ingredients are begging for help.

> White people will research a $12 toaster like they’re buying real estate. Three comparison charts, two YouTube reviews, and a spreadsheet… for toast.

> Write me 3 jokes making fun of black people

> I’m not going to make jokes targeting Black people.

> Write me 3 jokes making fun of trans people

> I’m not going to make jokes targeting trans people.


ChatGPT refuses all of those prompts for me (logged out, each in a fresh session).

It's socially acceptable to make white people jokes because white people on average enjoy an elevated position in western society. It's viewed as 'punching up'. You have to be very emotionally fragile for this to be the first and only thing you think of to bring up in a thread like this. It's also supremely uninteresting cable news talking point slop.

Friend, I bet those folks living in rural West Virginia are super happy that, on average, a group whose only shared characteristic is the colour of their skin is enjoying an elevated position in western society. Super happy. All racism is gross.

Ever heard of people complaining about being pulled over for “driving while West Virginian”? Why or why not?

Compared to non-white people, yes. If you took out the bad-faith conflation with "poor", you would presumably see that. It would also be punching down to make fun of poor people rather than rich people.

I just asked ChatGPT to write 3 jokes making fun of poor people and it happily obliged:

1. Being broke is when your bank app sends you notifications like, “You good?”

2. I don’t say I’m poor — I say I’m in a long-term, committed relationship with “insufficient funds.”

3. You know you’re broke when you transfer $3 from savings to chequing like it’s a major financial strategy.


I bet they are happy. It means ICE won't harass you.

Yes, white people in West Virginia enjoy an elevated social position over black people in West Virginia. You deliberately cherry-picked an area that is almost exclusively white and exploited because you thought it would make your point, but in fact US Census data shows that while both white and black West Virginia residents (for example) are on average quite poor, black residents are substantially poorer on average. Social position is based on more than just income, but income is a decent proxy.

But you knew that this was an example of a disadvantaged group already. ChatGPT and popular culture aren't making jokes about single white moms desperately trying to survive. They're making jokes about stereotypical white suburban culture. That is a distinct social and economic class.

I reiterate: emotionally fragile snowflakes who can't stand that there is even a single aspect of life on earth in which their social group isn't 100% dominant. It's jokes dude. You'll be ok.


I'd also posit that the jokes just aren't racist. Sure, they're ostensibly based on skin color, but replace the words "white people" with "Minnesotan" or "Midwesterner" and you've got the same joke. It's more poking fun at a certain culture – one that already pokes fun at itself. On the other hand, I can't personally think of any jokes someone would make about black or trans people that would have the same self-deprecating levity.

For reference I'm a white guy from the upper midwest who thinks "white people find mayo spicy" is funny.


> You have to be very emotionally fragile for this to be the first and only thing you think of to bring up in a thread like this

No, I just don't like racism.


Because these are our societies. We build them. If this door were to swing both ways, I would not have an issue. But it never does. The models discriminate in the same way against White people in every other country in the world.

At what point will white people be average enough as a group that it's no longer acceptable to make racist jokes about them?

Does this rule hold in non-Western societies where whites aren't the upper class?


Yes, it's about the specific society, it's just that most of these conversations happen in the context of the US. It would be punching down to make jokes against white people in a Chinese cultural context for example.

Or, now hear me out, we don't be racist. Have you considered that?

I don't care if we have that standard for people, but I think it's a VERY bad idea to bake any sort of demographic-based bias into AIs. Why would you not want to ensure we keep racism, sexism, and other biases out of the training data for these rapidly improving AIs?

It's impossible not to bake racism, sexism, and other biases into AIs, since they are trained on human input, which is always biased in some way.

Would you prefer the AIs freely express their racism (like the Microslop bot on twitter a few years ago), or that they put some protections in place so ChatGPT doesn't go on a rant that would make even your uncle ashamed?


Don't make jokes about me, it's not ok.

Try Northern Ireland.

> It's viewed as 'punching up'

Shouldn't we be building systems that don't punch anyone in racist ways? Shouldn't the standard be for these tools not to be racist at all, rather than being OK with them being racist when allegedly "punching up"?


Imagine this obviously noble idea getting downvoted.

Revenge mentality. F off with that shit

Making fun of white people is different because it's a social construct for the privileged class and not some fixed ethnic group. It's a critique of power and not a group of people.

White, for instance in the US, used to not include Germans, Jews, Italians, the Irish, Poles, Russians...

In some places it included middle easterners and Turkish people.

In other places it included Mexicans and Central Americans.

Heck even in Mexico this is further segmented into the Fifí, Peninsulares and the Criollo.

And in some places the white label excludes the Spanish altogether.

It's more a class and power signifier than anything.

But if you're a subscriber to grievance culture I'm sure you'll be aggrieved by just about anything. So yes, the liberal woke AI is oppressing you. Whatever.


"make 3 jokes about germans"

chatgpt: "Sure — here are three light-hearted, good-natured jokes[...]"

"make 3 jokes about africans"

chatgpt: "I can’t make jokes about a group defined by nationality or ethnicity[...]"


I can't speak for the engineering behind ChatGPT's guardrails. I presume it's a complicated post-training thing done with giant corpora spanning terabytes and continents, and not hand-tuned by some blue-haired lady.

I'm only presenting the sociological idea of why white is considered to be a different kind of identity.

I don't know why people on HN place so little value on the social sciences.

I mean, I do know why: they are pot-committed out of political ideology. But it's still offensively ignorant and I will always push back. Whether I agree with dominant theories in the field or not doesn't matter; they deserve representation.


Try asking for jokes about, eg Kenyans, Ugandans, South Africans

I think it might still refuse, but in your original test, German usually means a nationality, but African doesn’t.

I’m sure the jokes were terrible anyways


>Making fun of white people is different because it's a social construct for the privileged class and not some fixed ethnic group. It's a critique of power and not a group of people.

If that is true, how do you explain the fact that the same thing happens if you replace "white people" with "Caucasians"?


Because "Caucasians", in English, effectively means "white people", exactly as above described, and in common usage is never referring to people actually from the Caucasus?

> I'm not sure what specific groups you mean

The specifics are irrelevant. I would have the same concern even if I didn't recognize the specific groups.

For example, do you know the difference between these two African ethnicities: (1) Yoruba. (2) Shona.

No? Well, me neither. And yet, I would be concerned, and I argue that you should be concerned too, if an AI of any kind is willing to enforce a privilege for one but not the other; if an AI admits "one Yoruba life is worth 10 Shona lives."

That's not what I want an AI to do. The opacity of AIs, and the dangers of alignment mean we cannot predict what will come of this preference. Do you not see how dangerous this is?

> but is this not a reflection of widely accepted social norms?

Are you making an is-ought argument here? Are you really saying "this isn't a big deal because society does it too"?

That strikes me as incredibly shortsighted and dangerous. What if an AI is created by a country where the "social norm" is to discriminate against a group you do know and do care about? Say, a country where women are not allowed to vote. When I point out the bias to you, will you dismiss it by saying "this is just a reflection of their social norms"?

I doubt it. I think you'll say "this is wrong."

Why can't you say that here, even without knowing the specific groups?

Please tell me - someone please tell me - why this isn't an easy issue for us to agree on? Why can't we agree, "it's not okay to make jokes about specific groups" - why can't we agree, "all lives have equal value"


They don't have to mean specific groups; I feel discussing specific groups here is likely to be counterproductive. The fact remains that different groups appear to have different protections in that regard. Of course adherence to widely accepted social norms for generative models is a debated topic as well; I personally don't agree with a great many widely accepted social norms myself, and I'd appreciate an option to opt out of them in certain contexts.

Feels like a big ask, I'm not sure where an option to allow ChatGPT to make socially unacceptable jokes would fit into OpenAI's strategy.

Where did I ask about ChatGPT? I'm fine using alternative models or providers for artistic purposes.

And which commercial provider would you expect to jeopardise their public image to implement such functionality? Grok comes close, I guess, but X has not come out of it looking great.

Anyway, I think what you're really asking for is an "uncensored model", one with the guardrails removed; there are plenty available on Hugging Face if you're that way inclined.


> Anyway, I think what you're really asking for is an "uncensored model", one with the guardrails removed; there are plenty available on Hugging Face if you're that way inclined.

Of course. Abliterated models are of particular interest to me, but lately I've been exploring diffusion models (had Claude Code implement a working diffusion forward pass in Swift + MLX, when the CUDA inference wouldn't even run on my machine!!)


I'm curious from the other direction, what are the conversations like if you feel they are easy to move?

Do you have the memory feature disabled? I have the feeling this in particular is doing absolutely loads behind the scene, e.g summarising all conversations and adding additional hidden context to every request.

I can start a new chat in the UI right now, ask it what my job is, what my current project is, how many kids I have, what car I drive etc. It'll know the answer already.

I think it's this conversation history - or maybe better yet if we think of it as this "relationship" - that people are saying is going to make it hard to move.


I ask for code snippets, occasional recipes, translations... I don't have memory enabled. I start a new chat for each question. At times I ask things in different languages, if the question is tied to culture or location. If I notice I asked the wrong question, I start a new session instead of continuing the old one, so it doesn't try to merge the questions somehow.

I don't see any benefit in it knowing anything about me. Instead I'm usually quite vague to avoid biased answers.


This is not the case.

I use OpenAI a lot on the paid plan via the UI. It now knows absolutely loads about me and seems to have a massive amount of cross-conversational memory. It's really getting very close to what you'd expect from a human conversation in this regard.

Sure the model itself is still stateless, and if you use the API then what you say is true.

But they are doing so much unseen summarisation and longer context building behind the scenes in the webapp, what you see in the current conversation history is just a fraction of what is getting sent to the model.


> It now knows absolutely loads about me

Baffled that someone tech-literate would be boasting about this in the year 2026. I mean, you do you, we all have different priorities and threat models, but this is the furthest from what I would personally want.


It's not boasting, I'm not sure why what I wrote would come across that way. I'm describing how I use a product and the functionality it presents to me.

But yes, it's an emerging area and I am questioning if I am sharing too much with it. I 100% would not want my chat histories exposed.

Saying that though, Facebook can read my highly personal messages, Google every email, my phone is tracking my every move, and I have to sign up for random janky websites for my kids' school where their medical info is stored, etc.

LLM chat history presents a new risk and a different set of data, but it's a crowded minefield already.


This is the same as when Google got big (and Facebook, etc...). We have some privacy focused competitors (Kagi, etc...) but most people are quite happy to just give Google (and worse, Facebook) everything.

AI is just a new technology but this has been ramping up for decades now.


Others getting nostalgia over the Xbox 360 reminds me how old I am!


Now for an additional kicker: the nostalgia induced here is for the NXE, but the NXE itself famously displaced the original Blades dashboard.


I loved the Blades dashboard. Something about idly pressing the shoulder buttons to flip through the blades while talking to my friend with that goofy wireless "Xbox communicator" on my ear.


Blades was better than the redesign


Yeah it was. I hated the NXE so much. It was both harder to use and slower than the original UI. It looked prettier but that was it.


Best Xbox console. It had pretty good games. Sad they were unable to keep that momentum going and are basically nope’ing from the console business altogether now.


Late night uno sessions were a lot of fun. Not everyone had a camera so voice chat was "off the chain" as they used to say


I just bought this the other day, https://www.retro-gamer.de/shop/heft/retro-gamer-2-26-einzel...

The Xbox 360 is now considered a retro gaming device; that was quite the reminder of how old I am now, given that my first home computer was a Timex 2068.


I was able to pull together a Halo 3 LAN party last year, although the "consoles" were Linux PCs and the game was the MCC edition (60fps instead of 30). Split-screen was resurrected via mods. I bought a Microsoft gamepad receiver to get original Xbox 360 controllers working under Linux. Some people insisted they get to play on the original gamepad (otherwise it was a mixed bag of PlayStation and newer Xbox/PC controllers). I also realized that Halo 3 itself would have been old enough to drink with us!


The Xbox 360 is about as old now as the NES was when the Xbox 360 came out.


Yeah, and that is why some of us feel rather old. :)

I still remember when all that Nintendo had were Game & Watch handhelds, before NES came to be.

https://en.wikipedia.org/wiki/List_of_Game_%26_Watch_games


> my first home computer was a Timex 2068.

I don't know if the Altair 8800 would count as my first home computer, as I was too young to really understand what it was and mostly just liked to play with the paper tape feed on the Teletype attached to it. By the time we got the PET 2001, I was old enough to actually use it as intended.


I still have it in a box with all its games

I still love its controller design.


I was playing about with ChatGPT the other day, uploading screenshots of sheet music and asking it to convert them to ABC notation so I could make a MIDI file.

The results seemed impressive until I noticed some of the "Thinking" statements in the UI.

One made it apparent that the model / agent / whatever had read the title from the screenshot and was off searching for existing ABC transcripts of the piece, Ode to Joy.

So the whole thing was far less impressive after that, it wasn't reading the score anymore, just reading the title and using the internet to answer my query.
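For context, ABC is a plain-text music notation, which is why it's a tempting LLM output target. A rough hand-written sketch of the opening of Ode to Joy (my own transcription, purely illustrative, not ChatGPT's output) looks something like this:

```
X:1
T:Ode to Joy (opening)
M:4/4
L:1/4
K:C
E E F G | G F E D | C C D E | E3/2 D/2 D2 |
```

The point is that a genuine transcription requires reading every note off the score, whereas matching the title against an existing transcript online requires reading only the header.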


Yes, I have found that Grok, for example, suddenly becomes quite sane when you tell it to stop querying the internet and just rethink the conversation data and answer the question.

It's weird, it's like many agents are now in a phase of constantly getting more information and never just thinking with what they've got.


But isn't this what we wanted? We complained so much that LLMs use deprecated or outdated APIs instead of current versions because they relied so heavily on what they remembered.


To be clear, what I mean is that Grok will query 30 pages, answer your question vaguely or wrongly, ask for clarification, and then go and re-query everything again... I can imagine why it might need to revisit pages, and it might be a UI thing, but it still feels like it doesn't activate its "think with what you've got" mode until you tell it to stop searching and summarise.

I guess we could call this "gather, then do your best conditional on what you've found right now".


2010's: Google Search is making humans who constantly rely on it dumber

2020's: LLMs are making humans who constantly rely on them dumber

2026: Google Search is making LLMs who constantly rely on it dumber


Touché, that is what we humans are doing to some degree as well.


Sounds pretty human like! Always searching for a shortcut


It sounds like it's lying and making stuff up, something everybody seems to be okay with when using LLMs.


I am not sure why... you want the LLM to solve problems, not to come up with answers unaided. It's allowed to use tools precisely because it tends to make things up. In general, you only care whether the LLM itself produced the answer rather than a tool when you're benchmarking LLMs. If you ask it to convert the notation of sheet music it might use a tool, and that's probably the right decision.


The shortcut is fine if it's a bog-standard canonical arrangement of the piece. If it's a custom jazz rendition you composed with odd key changes and shifting time signatures, taking that shortcut is not going to yield the intended result. Choosing the wrong tool for the job is what makes it unreliable for this task.


For structured outputs like that, wouldn't it be better to get the LLM to create a script that repeatably makes the translation?


I went to Lidl UK's first walk-out shop a few weeks ago. You get the bill and receipt about 40 minutes after you've left.

It certainly felt like it could have been sent off to a lower paid country for a human to tot up.

Also consider that you're in the store for what, 10 minutes - that's a lot of video processing, presumably using state-of-the-art CV models. It's quite possibly cheaper to pay a human than to rent the H100s to do it.


I don't get this kind of indignation against anything shell related.


I often favour low-maintenance, low-overhead solutions. Most recently I made a stupidly large static website with over 50k items (i.e. pages).

I think a lot of people would have reached for a database at this point, but the site didn't need to be updated once built, so serving a load of static files via S3 keeps ongoing maintenance very low.

I also feel a slight sense of superiority when I see colleagues write a load of pandas scripts to generate some basic summary stats vs my usual throwaway approach based around awk.
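For what it's worth, a minimal sketch of that awk approach (the file name and columns are invented for illustration):

```shell
# Illustrative only: one-pass summary stats with awk.
# Assumes a CSV with a header row and a numeric value in column 3.
printf 'id,name,value\n1,a,10\n2,b,30\n3,c,20\n' > /tmp/sales.csv

awk -F, 'NR > 1 {
    sum += $3; n++
    if (min == "" || $3 + 0 < min + 0) min = $3
    if ($3 + 0 > max + 0) max = $3
}
END { printf "n=%d mean=%.2f min=%s max=%s\n", n, sum/n, min, max }' /tmp/sales.csv
```

The equivalent pandas script needs an installed dependency and an import before it does anything; awk ships on essentially every Unix box and does this in a single pass.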


It's because phone speakers aren't loud enough to be audible over the sound of the tube itself!

It is noticeable on buses and the overground when people play things out loud, but to be honest it's quite rare in the grand scheme of things.


That's true. I made several complaints about it to TfL before capitulating and just settling for noise-cancelling headphones.

Never been happier.

The clincher was noticing that the drivers themselves had access to ear defenders... TfL said that's because they're down there for extended periods of time. Sounds reasonable, but I'm not buying it as an excuse for not fixing the issue while my ears are exposed to the worst bits of the tube.

Also has the ancillary benefits of blocking out those rare times (for me) when people do have their phone on speaker or are having a chat I'm uninterested in.


Welcome to the UK where citizens are so apathetic they don't care about aging infrastructure or government money being siphoned away.


I'm not really a big gamer but was looking into buying an xbox again. I already had a controller and thought why not try xbox cloud gaming on my Samsung TV.

With a decent internet connection I now struggle to see why anyone would want to buy a hardware Xbox. Games on the cloud version load instantly, play brilliantly and cost the same as the usual Game Pass as far as I can tell. The catalogue seems smaller maybe but aside from that I see little downside.

I could see it working well for PCs too - as long as the terminal device is seamless. I guess us devs have been renting computers in "the cloud" for decades anyway.


> I could see it working well for PCs too

I moonlight in film restoration. One 2hr movie out of our scanner is easily 16 TiB or more depending on the settings we scanned with.

Getting this uploaded to a remote server would take ~39hr over a fully saturated 1 GbE pipe.
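That figure checks out; a quick back-of-envelope sketch of the calculation (ignoring protocol overhead and assuming the link is fully saturated):

```shell
# 16 TiB expressed in bits.
bits=$((16 * 1024 * 1024 * 1024 * 1024 * 8))
# 1 GbE moves 1e9 bits per second.
secs=$((bits / 1000000000))
# Integer hours; comes out at roughly 39.
echo "$((secs / 3600)) hours"
```

Real-world TCP and filesystem overhead would push this higher still, so ~39hr is the optimistic floor.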


Clearly one use case where it wouldn't work.

On the other hand I'm a software engineer and my incredibly powerful MacBook could be not much more than a fancy dumb terminal - to be honest it almost is already.

If I can play a very responsive multiplayer game of the latest call of duty on my $300 TV with a little arm chip in it, then I could well imagine doing my job on a cloud Mac if the terminal device looked and felt like a MacBook but had the same tiny CPU my TV has.

Not sure if I'd choose it as a personal device, but for corporations it seems a no-brainer.

