FTC investigating ChatGPT over potential consumer harm (npr.org)
138 points by cratermoon on July 13, 2023 | 138 comments


Sam Altman himself sat in front of the top levels of government and freaked them out by telling them his company was laying the groundwork to possibly end humanity. A really strange thing to do; I remain puzzled as to why any CEO would stir up the government like this.

Having panicked regulators with science fiction, I hope he's not surprised when they take action.


I think something that's increasingly clear is that OpenAI's only moat is time. Every other company, and even completely independent models, are rapidly catching up. And this is happening at the same time that OpenAI is already making claims that they're hitting diminishing returns on model size [1], as happens in literally every neural-net-based field.

If Sam's Gambit succeeded, OpenAI could have potentially been granted a near absolute monopoly with the legislative reach of the US government working to imperil competitors through a Gordian knot of rules and regulations which OpenAI could have been the primary creator of, perhaps even as the head of some sort of public-private 'Artificial Intelligence Accountability and Trust Division.' It really just gives one that happy feeling of bureaucracy mixed with dystopia.

[1] - https://www.wired.com/story/openai-ceo-sam-altman-the-age-of...


> If Sam's Gambit succeeded, OpenAI could have potentially been granted a near absolute monopoly

I don't think this was ever realistic. (I don't think Altman is that politically naive.) Instead, it was marketing. It's saying you've got something so powerful it's dangerous and should be treated carefully. While everyone's debating how to treat it carefully, they concede in public that it's powerful.


>Every other company, and even completely independent models, are rapidly catching up.

I see the HN headlines like "new model approaches GPT-4 in benchmarks!" but they turn out to suck when I try them. What can I use today that comes close to GPT-4? Certainly not Bard.

I generally run two tests to gauge a model:

1. Write me a Python script that blah blah blah

2. Write me the lyrics to a song about blah blah blah

Only GPT-4 has given me impressive results. For #2, the other models manage to rhyme (except most local models), but their rhymes are really basic compared to GPT-4's. (A rough script for running this comparison is sketched below.)
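For what it's worth, this kind of gauge is easy to script so the comparison is side by side. A rough sketch against the 2023-era openai Python package; the prompts stand in for the blah-blah-blahs above, and the model list is a placeholder for whatever challenger you're testing:

    import openai  # 2023-era SDK (openai < 1.0); reads OPENAI_API_KEY from the environment

    # Placeholder prompts standing in for the "blah blah blah" above.
    PROMPTS = [
        "Write me a Python script that deduplicates lines in a file.",
        "Write me the lyrics to a song about merge conflicts.",
    ]

    # Swap in whichever model you want to gauge against GPT-4.
    MODELS = ["gpt-4", "gpt-3.5-turbo"]

    for model in MODELS:
        for prompt in PROMPTS:
            resp = openai.ChatCompletion.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            print(f"----- {model} | {prompt[:40]}")
            print(resp["choices"][0]["message"]["content"])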


> Only GPT-4 has given me impressive results

OpenAI is durable only if GPT is middlingly differentiated. If it's uncompetitive, they lose. If it's vital, they face regulation and state-sponsored competition. (I personally advocate for a Heavy Press Program [1] for training American and allied AIs.)

[1] https://en.wikipedia.org/wiki/Heavy_Press_Program


There is a related moat: the lack of AI chips. Though I guess you could say that, with time, more chips will be built.


I think he just wanted to enact restrictions on the further training of models (maybe) and make it so that no one could compete with his company. I can't see another explanation.


It's easy to assume that people like Altman are playing 12-D chess here, but it's possible that he's just a run-of-the-mill eccentric billionaire.

Maybe he really believes that he's the only person alive who can safely usher in a new AI godhead, and he feels this should be self-evident to everybody once he explains it to them.


This is not really 12-D chess. He likely got advice from his largest investor on how DC works. Regulation forms a nice moat for first movers and existing players.

If you consider the leaked Google engineer memo about open-source models being a threat, regulation and LLM registration would be the strategy for constructing a moat.


It was a gambit to try to secure OAI's moat. It failed, and while competitors are catching up (still a while to go yet), he put a target on OAI's back.


This conspiracy theory is myopic. Altman has been cognizant[1][2] of the very-not-scifi danger of machine intelligence since before OpenAI was even founded. I believe he has dangerous levels of hubris but I don't see him being profit motivated. From his statements in the past and his regrets today[3] of bringing the current tech into existence, I'm confident his motivator is simply the desire to not die.

>WHY YOU SHOULD FEAR MACHINE INTELLIGENCE

>Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could. Also, most of these other big threats are already widely feared.

- Sam Altman, February 25, 2015

[1] https://blog.samaltman.com/machine-intelligence-part-1

[2] https://blog.samaltman.com/machine-intelligence-part-2

[3] https://www.businessinsider.com/openai-ceo-sam-altman-says-h...


Lots of retrospectively-good predictions in there, but this was the one I undervalued the extent of:

> We also have a bad habit of changing the definition of machine intelligence when a program gets really good to claim that the problem wasn’t really that hard in the first place

We've done this so much recently that I'm now seeing rewritten definitions of "real" intelligence that most humans do not meet.


There's quite a rational justification for the ever-'shifting goalposts.' When humans describe some future milestone as finally being "real AI", they're not really just describing that milestone, but the many adjacent capabilities they expect that milestone to entail. But we're quite clever, and like any old metric needing to be juked, we invariably find ways to achieve the milestone while sidestepping all the capabilities it was supposed to entail.

Chess is the obvious example. A machine capable of playing chess at a human level was supposed to indicate the advent of genuine artificial intelligence at one time. It wasn't that playing chess well means one is intelligent, but rather it was assumed it'd entail abstract planning, strategic thought, intuition, and creativity. Of course now we have software which can crush even a world champion, but none of those adjacent capabilities emerged at all.

And so I think it's also increasingly obvious that this is the same thing with chatbots. Many of us thought those 'surrounding capabilities' were finally here, even more so with OpenAI regularly demonstrating exceptional competence on a wide array of distinct metrics, such as performance on the bar exam. But once you use the system for a while it becomes clear that its knowledge base is absolutely and unbelievably immense, yet its 'understanding' of that knowledge is literally zero. It will arbitrarily invent things like API calls that do not exist, mix up utterly simple concepts, and fail to learn from its mistakes in any meaningful way whatsoever.

I'm sure if you've used ChatGPT for anything you've run into the utterly annoying scenario of:

- "How do I [x]?"

- "Sure! That's easy, just do [A]."

- "No, you're hallucinating."

- "Oh sorry, thanks. You're right you actually need to do [B]!"

- "No, you're still hallucinating."

- "Oh sorry, you're right. You just need to do [A].

If a human, even a stupid human, acted in this way, you'd assume they were trolling you, especially one gifted with the ability for infinite, perfect, and complete recall.


Have you used GPT-4 significantly? I've had many experiences* of answers that, as far as I can tell, require strong reasoning skills and a world model. It is unreliable, as you say, but demonstrating those skills even a small fraction of the time would still be proof of their existence.

*: (such as when GPT-4 describes what would be output by Python code that I write and feed it, if it were run, despite GPT-4 not having any access to a Python interpreter and therefore having to simulate what one would do with my code, and despite my code not being in its training set.)

> especially one gifted with the ability for infinite perfect and complete recall.

You know that LLMs don't have this, right? There is no database they have access to containing their training data. They just have the weights that were optimized in response to seeing that training data.


What makes you think there isn't a limited interpreter, or an expert system effectively akin to an interpreter, working behind the scenes? I think this is a fairly easy concept to test: give it 'code' that is trivial to understand, but in a [hopefully] novel syntax that might also fuzz up guess-the-next-word. I just gave ChatGPT this prompt:

------

"I'm working with a new computer language. What would be the output of:

IsTrue is not true

If IsTrue is true then print IsTrue

If IsTrue is not true then print IsNotTrue"

------

It hemmed and hawed, and accurately described what the program would do, which is basically just repeating the last two lines back to me, but it refused to tell me what the output would be. When I demanded it tell me what the value of "IsTrue" would be, so I could figure out the output, I got:

"I apologize for the confusion, but as an AI language model, I don't have access to the specific values of variables in your code or the ability to execute code directly. In the given context, the value of IsTrue is not specified, so I cannot determine its exact value. It could be either true or a value that is not true, depending on how the variable is defined or assigned in your code."

I then gave it the exact same program in C#, and it unsurprisingly gave me not only a far more meaningful description of what the code does, but also the exact output, immediately. (A script to reproduce this probe is sketched below.)

https://en.wikipedia.org/wiki/Mechanical_Turk
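(If anyone wants to reproduce the probe above, it's a few lines of script. A sketch against the 2023-era openai Python package; note the C# version is my own rendering of the same program, since I didn't paste the original:)

    import openai  # 2023-era SDK (openai < 1.0); reads OPENAI_API_KEY from the environment

    # The novel-syntax program from above, plus my rendering of it in C#.
    PROBES = {
        "pseudo-language": (
            "I'm working with a new computer language. What would be the output of:\n"
            "IsTrue is not true\n"
            "If IsTrue is true then print IsTrue\n"
            "If IsTrue is not true then print IsNotTrue"
        ),
        "C#": (
            "What would be the output of this C# program?\n"
            "var isTrue = false;\n"
            'if (isTrue) Console.WriteLine("IsTrue");\n'
            'if (!isTrue) Console.WriteLine("IsNotTrue");'
        ),
    }

    for name, prompt in PROBES.items():
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # keep the comparison as repeatable as possible
        )
        print(f"===== {name} =====")
        print(resp["choices"][0]["message"]["content"])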


> What makes you think there isn't a limited interpreter

Because OpenAI says there isn't, and there are open source LLMs which show the same (but less profound) ability.

> expert system effectively akin to an interpreter working behind the scenes?

You're describing reasoning again. No-one hardcoded an "expert system" for Python into ChatGPT. If it has become one, it is through reading about Python (and same for C#).


Then what would be your hypothesis on why ChatGPT is incapable of reasoning about the most utterly trivial pseudocode examples that even the worst developer in the world would instantly understand, yet can parse and output complete and sophisticated programs in certain other languages? Another bit of evidence is how ChatGPT habitually mixes things up (as is the very nature of LLM systems as currently built), yet what you will never see happen, as in 0% of the time, is it randomly mixing code into a chat response. Again, this is extremely trivially explained by the fact that code generation and chat generation are distinct systems, but otherwise...?

Also, expert system [1] need not be in quotes. It's an 'AI' technology dating back to the 60s; it's essentially just a fancy term for hard-coding queryable domain-specific knowledge into a system. As an aside, where has OpenAI claimed any of this is false? Or, for that matter, which open-source LLMs produce code that's not completely buggered?

[1] - https://en.wikipedia.org/wiki/Expert_system


Sorry, quick clarification: I asked if you've been using GPT-4, have you? I agree that GPT-3 / "normal ChatGPT" does not have the abilities I'm talking about.


Are you using the GPT-4 API?


> We've done this so much recently that I'm now seeing rewritten definitions of "real" intelligence that most humans do not meet.

Love this sentiment, stealing it =)


:) I heard someone say "LLMs are just lossy compression algorithms" dismissively yesterday, and I would love to hear their understanding of what a brain does.


> I believe he has dangerous levels of hubris but I don't see him being profit motivated

Altman peddles crypto. Keep that in mind when giving him the benefit of the doubt on integrity and profit motivation.


Which coin?


Why not both? I think a true AGI superintelligence (a lot smarter than us) does have the potential to destroy humanity, and as the current leader, OpenAI seems closest to achieving that. Of course, superintelligence might be far away or just unachievable, but we don't know that. And I agree Altman's statement also builds OAI's moat, but that doesn't make his statement false.


But isn’t it closer in the same way that I’m closer to Japan than my dog who is sitting 1 meter to the east of me is? We are still both 10,000 km away


This technology (transformer language models) was invented about 6 years ago. For it to be far away means that at some point the exponential has to stop and we have an AI winter. That’s possible but doesn’t look likely at this point.

It’s more likely that we will see superintelligence in our lifetime. And if the rate of progress does not slow it will be sooner rather than later.

My current estimate is parity by 2030 and superintelligence by 2035. Evidence from specialist AIs, e.g. Go, indicates that super AIs tend to occur soon after parity is reached. E.g. AlphaGo (parity) March 2016; Master (super) December 2016.


> For it to be far away means that at some point the exponential has to stop

Not if we are on the wrong path, going in the wrong direction, which I think was at least partly the point of the comment to which you are responding.

There is a school of thought that something fundamental is being missed by modern AI and the (amazing) success of GPT3+ ironically risks directing us further down that wrong path at an accelerating pace.


I'm pretty sure AGI will not be one model, but a collection of models, with some kind of vector database (or something more efficient than that), orchestrated by one or multiple master models.

A large LLM that knows pretty much everything there is to know about the world seems like a good building block for an AGI, no matter how the rest is built.

LLMs could be like the Broca's and Wernicke's areas of an AGI brain, working in unison with dozens of other parts. (A toy sketch of this orchestration idea follows.)
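To make the shape concrete, here's a toy sketch. Everything in it is a made-up stand-in: the routing rule, the "expert" callables, and the brute-force vector store are placeholders for real models and a real database.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Deliberately crude embedding (character histogram); a real
        system would use a learned embedding model."""
        v = np.zeros(128)
        for ch in text.lower():
            v[ord(ch) % 128] += 1.0
        n = np.linalg.norm(v)
        return v / n if n else v

    class VectorStore:
        """Minimal 'vector database': brute-force cosine similarity."""
        def __init__(self) -> None:
            self.items: list[tuple[np.ndarray, str]] = []

        def add(self, text: str) -> None:
            self.items.append((embed(text), text))

        def nearest(self, query: str) -> str:
            q = embed(query)
            return max(self.items, key=lambda item: float(item[0] @ q))[1]

    # Stand-ins for the specialist models a master model could call on.
    EXPERTS = {
        "math": lambda q, ctx: f"[math model] solving: {q}",
        "language": lambda q, ctx: f"[LLM] answering {q!r} with context: {ctx!r}",
    }

    def master_model(query: str, memory: VectorStore) -> str:
        """The 'master model': pick an expert and hand it retrieved
        context. Real routing would itself be learned, not an if-statement."""
        expert = "math" if any(c.isdigit() for c in query) else "language"
        context = memory.nearest(query)  # long-term memory lookup
        return EXPERTS[expert](query, context)

    memory = VectorStore()
    memory.add("The FTC opened an investigation into OpenAI in July 2023.")
    memory.add("17 * 23 = 391")
    print(master_model("What is the FTC investigating?", memory))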


Or LLMs aren’t involved at all. After all LLMs work at the level of word tokens … how fundamental really are words to AGI level intelligence; the human brain doesn’t operate on the level of words, words are something we picked up, that a more fundamental mechanism at play deals with.


Humanity wasn't doing anything significant before words.


Words help communicate ideas between people, allowing society to advance greatly, but they aren't necessary for thinking.


Kids work out how to manipulate their parents well before they can speak.


Intellectual parity with an average human by 2030? Would you be willing to bet on that?


I don’t know much about AI but this:

> That’s possible but doesn’t look likely at this point.

Why doesn’t it seem likely?

We’ve been trying to do great things with AI for decades. We don’t seem to have an excellent grasp on why certain things work well or if the strategies we’re using can ultimately yield much better intelligence.

My impression is that we really don’t have a lot of control over progress and we could very likely hit walls and stall for many years without meaningful progress. What am I missing?


I can think of two situations that might lead to an AI winter:

1. We are wildly underestimating the computation requirements.

2. There are theoretical roadblocks coming up such that even a very large number of smart people being paid to solve the problem won’t find a key sequence of ideas. Think Riemann Hypothesis, or Fermat’s Last Theorem, etc.

The counter-argument to (1) is that available computational resources are very high given the billions of dollars available. The one system we know to possess human-parity intelligence (the human brain) uses 12 watts and is not exactly a data center.

The counter-argument to (2) is that we’ve made faster than expected progress since the discovery of transformers, and we seem to be quite close already given the capabilities of GPT-4. Of course you don’t know that you’ve hit a roadblock until you hit it, but so far it’s been smooth sailing.


This seems reasonable, since we have intelligence in 2023 that can pass both the U.S. bar exam and the MKSAP. Yann LeCun posted a PowerPoint last summer about the path to AGI and a model for achieving it. Given the pace of progress, 2035 seems reasonable.


Right, I can get closer to the moon than you by climbing the right tree in the right part of the world at the right time. But that doesn't mean I'm any closer to inventing rocketships.


I can't imagine that LLMs are anything other than one tiny component of a true AGI. Maybe/probably necessary, but altogether insufficient by themselves. Even if one could spawn true AGI, it's sort of like the tornado tearing through the junkyard and accidentally assembling the 747. Just altogether improbable.

It even seems unlikely that, even after having invented/improved the LLM, the other components are anywhere close to being developed.

If his statement were honest and not strategic, then he's utterly irrational and quite possibly self-destructive (even genocidal). One simply wouldn't continue in that line of research if one believed there was a risk of an omnicidal AGI being spawned. It's like something out of a bad Lovecraft story... if you think the spell will summon Cthulhu, the best thing to do is just stop.


Yep. Especially with the geohot leaks and torrent activity at an all-time high, OpenAI looks cooked.


Torrent activity?


What are the geohot leaks?


I assume comments like this one: "GPT-4: 8 x 220B experts trained with different data/task distributions and 16-iter inference."

https://twitter.com/soumithchintala/status/16712671501017210... https://archive.li/rfFlW

I'm not sure what the most canonical mixture-of-experts paper is, but here's one possibility (a toy sketch of the mechanism follows the link):

https://arxiv.org/pdf/1701.06538.pdf
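The core mechanism in that paper is a gating network that picks the top-k experts per input and mixes their outputs by renormalized gate weights, so only k of the N experts actually run. A toy numpy version (the sizes are arbitrary, and whether GPT-4 does anything like this is, per the above, rumor):

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, k = 16, 8, 2  # toy sizes; "8 experts" is the rumored GPT-4 figure

    # Each "expert" here is a single linear map; in a real MoE layer each
    # expert is a full feed-forward network.
    experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
    W_gate = rng.normal(size=(d_model, n_experts))

    def moe_layer(x: np.ndarray) -> np.ndarray:
        """Sparsely gated MoE in the spirit of the paper above, simplified:
        route the input to its top-k experts and sum their outputs,
        weighted by a softmax over just those k gate logits."""
        logits = x @ W_gate            # one logit per expert
        top = np.argsort(logits)[-k:]  # indices of the k best experts
        gates = np.exp(logits[top] - logits[top].max())
        gates /= gates.sum()           # renormalize over the selected k
        return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

    y = moe_layer(rng.normal(size=d_model))
    print(y.shape)  # (16,): only 2 of the 8 experts ran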


I think when people refer to MoE they're actually generally referring to the Google GLaM paper.


https://the-decoder.com/gpt-4-architecture-datasets-costs-an...

Not OP, but this is where a cheeky Google search got me.


"The idea is nearly 30 years old and has been used for large language models before, such as Google's Switch Transformer."

Innovation! :)


George Hotz (a.k.a. geohot), in his recent interview with Lex Fridman, gave some info on the probable structure of GPT-4.


Maybe he's just a CEO doing the ethically responsible thing and not the purely self-serving thing for the sake of shareholders? (though as humans, the shareholders have some stake in humanity too)


Wow that's naïve (or sarcasm).

I did an internship with ElementAI a while back. They had a full lobbying division to "come up with a legislative framework with the government for ethical AI". This is quite literally just a business strategy where you get to write the rules because lawmakers see you as "their guys" while challengers are seen as reckless.

It was already a thing in 2019 and it definitely is still a thing. Sam Altman just made the gambit that Congress would see him as the honest and prudent guy. Seems like that just didn't work.


Not sarcasm! Though I agree this also helps build OpenAI's moat against competitors... but that doesn't make Altman's statement untrue. It's still self-serving, but I think if he were totally willing to lie he'd have made other self-serving statements, and not this particularly high-risk, shocking, and also true (IMHO) self-serving statement.


I can't imagine a valid ethical justification for withholding the model training details that they withheld. I'd speculate that they did it only to slow their competitors, but I don't have to speculate, because "the competitive landscape" was the reason given right there in their paper.

There is (IMHO) no good reason to let OpenAI argue simultaneously for "this technology is too dangerous to be replicated" and for "... but if you have a credit card then you can use it right now".


Hear, hear.


The fact that we are bewildered instead of cherishing, and punishing instead of rewarding, shows what the incentives around honesty are nowadays. It sounds nihilistic, but Sam could have done better for his company by being untruthful.


> Maybe he's just a CEO doing the ethically responsible thing

But he's not. As Scott Galloway put it, he's raising a gun to grandma's head and screaming "stop me before I shoot her!" He's saying ethically-responsible things. But simultaneously acting in contrast to his words.

Altman is doing an excellent job as a CEO. But that doesn't make him an angel. Just look at Worldcoin, his crypto project, and tell me you see the marks of a humanitarian.


There's a simple explanation: he believes what he said.


If he believed what he said, why would he put the thing on the internet with plugins? Why would he be helping to secure funding to build even more powerful models?

If he believed it was truly dangerous, I don't think he'd be working for a company that is advertising it is actually pursuing superintelligence.

Not saying you're wrong, but there is definitely some very noticeable dissonance between his messaging and his actions.


I imagine he thinks OpenAI can get superintelligence right, and that someone else would get it wrong.

You're right that there's some dissonance there, however. Especially since three of the leading labs (OpenAI, Anthropic, and DeepMind) were _all_ founded on the premise that they had to be the ones to get superintelligence right because the risks were so high.


He said in his interview on the Lex Fridman podcast that he wants to get AI in front of people as fast as possible, to give us the longest possible time to start adapting to it.

I think he wants to scare people a little in a somewhat controlled way to onramp us to this new reality as fast as possible.


I'm not trying to be overly cynical here, but what someone says and what someone does are entirely different things.


We want to be the ones who win the race to AGI, not somebody like Russia.


There's also the simple explanation that he needed a moat.


I think there's something to the arguments that regulation will benefit OpenAI.

However, they shouldn't be overstated. OpenAI has had the obviously best LLMs for over 4 years now. They aren't a flash in the pan.


> I remain puzzled as to why any CEO would stir up the government like this.

I have a belief, as an ML researcher, that there are two types of x-risk AI/ML researchers: true believers and hype-men. People do hype the x-risk because it is catchy and gets more people talking: the whole "there's no such thing as bad press" strategy, which even worked in recent elections. I think there are two dangers with these people: 1) they eventually turn into true believers (say something enough and you start to believe it), and 2) it distracts from the risks of the current dangers. As for Sam, I think he is a true believer, and I'd expect that of any CEO who spends significant amounts of time hyping people up to gather capital by promising a future AGI.

As for myself, I'm not buying the x-risk arguments. There's a lot I could say and a lot of nuance, but to put it briefly I'll reference something Mitchell said. She noted that it seems rather unlikely that a superintelligence that outperforms us in every single way would also be unable to understand the intention behind instructions (meaning it doesn't understand us, a contradiction in superintelligence). The real danger is handing things over to a model without human supervision and letting it hallucinate. So the danger is thinking it is smarter than it actually is, and then trusting it. Concentrating on the ineffable abilities of superintelligences distracts us from this danger, which already exists.


>A really strange thing to do, I remain puzzled as to why any CEO would stir up the government like this.

Because it's a very serious possibility (that the singularity could end humanity), and a significant portion of the people who are serious about AI are extremely worried about alignment, for good reasons.

> I hope he's not surprised when they take action.

Surprised? Do you mean relieved?


> I remain puzzled as to why any CEO would stir up the government like this.

Because 1) the government isn't going to do anything and 2) investors will think OMG this must be profitable $$$


Marketing. OpenAI needs to create the illusion that their chatbot is more than just a gradual evolution of past models that still needs a lot of work.


I don't know if SA is an Effective Altruist, but it feels a bit like he got caught up in their weird logic of "AI will be an extinction-level threat to humanity - and the only way to address the threat is somehow to build exactly that AI".


You can take Roko's Basilisk out of the community forum but you can't take Roko's Basilisk out of the community zeitgeist.


> I remain puzzled as to why any CEO would stir up the government like this

Because he sold his soul to the Devil (M$) and is now pursuing regulatory capture in order to build a "Open"AI-controlled moat where no competitors can survive.


He wanted heavy regulations on AI which would have benefited his moat


He wants regulatory capture obviously


> with science fiction

Because he doesn't believe it's science fiction; you are projecting.


That is completely unrelated to the FTC complaint.


Government is a many-headed hydra.

You can't poke a stick at one head and think the others won't bite you.

Altman effectively said to every level of government "OpenAI is extremely dangerous, you'd better look out for us".

All the wisdom of Bilbo Baggins reaching into the troll's pocket.


Well that’s one way to solve AI alignment. Just sue the companies for harming consumers.

I’m half joking, but in the statistical distribution of good, bad and ugly things automation can do, we will need to draw some line as to what is legally actionable. For example, if I Google search recipes for explosives, Google isn’t liable for surfacing accurate information. And imo OpenAI isn’t liable if their chatbot gives the same accurate info in response to the same query in the form of a chat message rather than a link.


One of the differences is that Google isn't originally writing that text themselves; they're pointing to someone else who wrote it. In a potential lawsuit, Google would be able to solidly point to the domain and ownership of the text's actual author. If (when) OpenAI ends up in court, who will they point to? If someone is defamed, the courts will find a human being responsible for it.


What about the responsibility of the person writing the prompt?


I guess it depends on the prompt. But if someone just asks “what crimes has X famous person committed?” and ChatGPT spits out some false information, the libel would be OpenAI’s fault, no?


Sam Altman has put a target on every AI company's and researcher's back with his constant fearmongering when speaking directly to the government.

In his selfish efforts to build the ultimate moat around his company, Altman's attempt to get government regulation to stifle future competition might backfire.

Whilst I'd normally say it would be satisfying to see Sam fail due to his selfishness, unfortunately this laser focus from the government might bring the rest of the industry down with him.


I hope he succeeds, and stifles AI development. AI is too advanced for humans to use properly and it probably will end humanity. If the rest of the industry is brought down, I for one will be happy.


You are talking as if USA regulations apply to the whole world.


This argument goes back and forth forever, but if AI does indeed turn out to be harmful or a risk to humanity, I don't think other countries will be just immediately eager to deploy it, do you?

People aren't stupid. Humans generally want to live and hope their children flourish and not be destroyed by some type of horrific event. Even...c..cc.communists.

If enough top researchers agree on the dangers and switch from advancement to mitigation, and international bodies co-operate, things would probably be OK.


…see: nuclear weapons


Yeah, what about them?

Sounds like you're placing all your hope in apathy, good luck with that.


> what about them?

Are nuclear weapons not "harmful or a risk to humanity"?


I think it's an oversimplification and an unwise comparison.

Everyone keeps bringing up Russia and China... they like nuclear weapons because those give them power. AI can take power away, especially if a superintelligent AI were to come into being.

Have you ever noticed how dictators like to be in control? If they actually are aware there is little chance they'd be able to control these things, they're much less likely to want them to exist.

It's just not the same type of problem. The only thing the two have in common is that they can both be dangerous and cause a lot of harm, but so can fire.

This same line of reasoning is why Hinton thinks we might be able to slow things down with an international agreement, because no one wants to see things spiral into chaos. No one wins that, maybe not even the AIs.


> only thing the two have in common is that they can both be dangerous and cause a lot of harm, but so can fire

I think the previous comment's point was this.

You said "if AI does indeed turn out to be harmful...[you] don't think other countries will be...eager to deploy it" [1]. The comment responding to you said there is no precedent for that. Before you get super-intelligent humanity-destroying AI you may get merely-intelligent enemy-destroying AI. If the second evolves into the first, and we don't know when, everyone is incentivized to develop the second out of military necessity.

[1] https://news.ycombinator.com/item?id=36720159


I don't think this is true; if international co-operation can happen, not everyone will try to build these systems.

My point is, everyone sits on the internet and screams "but Russia, but China," yet no one really knows what those countries think about the situation, because no serious discussions have begun.

I did see an interview with Geoffrey Hinton, who says he can see that Chinese researchers are concerned; he reads it in mailing lists. In my opinion that is a good start, because again, I don't think the CCP is stupid, and they also have a lot of influence in the world.

My point is, trying to do nothing and maintaining the status quo won't end well either.

I think most people have just given up: climate change, now this; it's too much for people.


> if international co-operation can happen

That’s the point. For a system such as this, it cannot. That’s just reality.

It gives an advantage to have it, particularly when others don't. Its development happens in silence, making enforcement uncertain. And you don't want to get into a shooting match without it when your enemy has it.


So just sit back and hope for the best?


> So just sit back and hope for the best?

There are options between giving up and daydreaming.

Consider: a public registry of who is working on what, with which data. (A good opportunity to address the copyright question.) And a working group to advise, without pomp, on whether there is a threat and what it might look like.

We don't have an intelligent discussion because everyone speaking has a conflict of interest or no idea what they're talking about. Maybe making more hard data public will help.


That sounds like something, good.


I think other countries will happily explore the AI topic further, independent of the US. Regarding stupidity and hope, see how well we're doing with climate change or environmental regulations.


I think instead of being pessimistic about the inevitability of disasters, we should try and prevent them. We're doing better than nothing about climate change, and we can keep trying to do more even though the damage is already great.


I think these are separate concerns. On one hand, yes, we should try to do what we can. On the other, when talking about things like AI development, we shouldn't be as naive to expect that everyone will play nice.


I too am surprised by the apathy angle on this forum.

"Nothing can be done, oh well..."

I guess this is when humans do actually lose their humanity and go extinct, if we just give up in the face of challenges, we're done.


Nobody can do anything to stop AI progress; we are too interested in competing with each other to cooperate against AI.


As we see, it already works


Has Google ever been investigated over surfaced search results that could cause harm?


It should be investigated.


And this would achieve what goal exactly? Even more search result sanitizing? Google scrubbing everything even remotely controversial from their search results because it might "harm" someone? No thank you, I value access to information. The last thing we need is more of "As a search engine I cannot...".


ChatGPT is a privacy nightmare. When you disable "Chat history & training", the setting is stored client-side in local storage. That means the setting gets lost when you switch browsers or delete your browser cache.

Also, when you disable "Chat history & training", plugins are not available to you as a paid user.

I believe Sam Altman doesn't care about his users' privacy at all. I lost trust in him completely when he first mentioned that he doesn't understand why decentralized finance might be useful to people. Some months later, when Silicon Valley Bank was in the process of shutting down, he started crying. At the same time, he supports Worldcoin.


The plug-ins part makes total sense to me. I would reason that plug-ins can save your chat history and use it as their own data, so if you want to keep your privacy fully intact, plug-ins would be disabled as well.


And what about Code interpreter?


Could be they offload the computing to something like a REPL or another third party on the backend, no?


This is such a great time to be a lawyer. Just play any AI like a slot machine until it gives you incorrect results and then sue. Probability is a bitch.


But don't they have clauses like: our models sometimes hallucinate and give out wrong answers; basically, use at your own risk, and you cannot sue us if it goes wrong?


Exceptions exist, that's why we have courts, to figure it out.

You can't put up a sign on a rollercoaster that says it's going to kill customers if they ride and then get away scot-free when someone dies. Though it took a court to say that, I'm sure.

Same thing here. OpenAI is saying things, and then a court is going to have to decide whether that's acceptable.


Anything an LLM generates should be verified if it is going to be used in a high-stakes decision. Not knowing how LLMs work won't be a good defence. You can't blame ugly handwriting on the pen.


There is a big difference between someone being killed and a chatbot that says some mean or inaccurate things.


I wish the FTC/SEC would break up Facebook, Google, and Apple. ChatGPT is small fish.


[dupe]

Some more discussion over here: https://news.ycombinator.com/item?id=36717420


> The agency says it's looking into whether the AI tool has harmed people by generating incorrect information about them

Haven't heard this complaint before. And I wouldn't really call this harmful to consumers; if anything it's harmful to the subject, but not to consumers in general. But even so, it doesn't meet the definition of libel. How harmful is it really?

Of all the complaints against OpenAI, absolutely none seem to be what I think really matters, which is OpenAI tweaking it to be politically correct and geared to wherever they fall on the Overton window. It's a tool. It should be sharp. And we should expect more of its users, lest we want dumber ones.


It could absolutely cause real harm to a person or group. There was a recent case where it accused a professor of sexual harassment, seemingly out of nowhere. Imagine that spreading on Twitter and the person getting mobbed, doxxed, etc. Or a potential employer seeing it and silently rejecting your application.

Excuse the link, the big papers have awful popovers, this one seemed acceptable...

https://decrypt.co/125712/chatgpt-wrongly-accuses-law-profes...


At the bottom of the ChatGPT prompt it always says: "ChatGPT may produce inaccurate information about people, places, or facts."

So... is this just a case of user error? Or maybe the FTC needs to require a larger font size or just better disclosure? Mostly joking, but not entirely since I do think it's too small. Here's a proposal: https://i.imgur.com/IGON20r.png


The person being slandered never agreed to that disclaimer. If I manufacture and sell a car with a disclaimer "sometimes the brakes fail" and it runs over and paralyzes someone, the injured person and the judge are not going to care ten cents about whatever disclaimer the purchaser signed. If anything it will be used as evidence that the manufacturer knew they were selling a dangerous faulty product.


Well unlike the brakes situation, the ChatGPT lies are non-physically harmful misinformation and they're only sent directly to the user who presumably should be aware of the ChatGPT limitations. If I share ChatGPT lies as truth, that seems no different than me just sharing lies I made up. And it seems like in this case no one even believed the ChatGPT lies. I'm sympathetic because I'm sure it's incredibly hurtful seeing those specific lies, but no one was actually convinced.

We all already treat the internet at large as a source of "usually correct but not always" information, and people just need to take the ChatGPT disclaimer to heart and not treat it like some oracle. I imagine articles like these will fade over time as society finds lies coming from the "makes-up-lies-sometimes machine" (ChatGPT) as boring and commonplace as the many websites full of lies we already have.


The FTC is currently opening an investigation. Multiple nations are launching inquiries into these systems. If you think this is a problem that isn't going to become much bigger, you are in for a rude surprise.


A real-world example of how things can go very, very badly: :(

https://en.wikipedia.org/wiki/Indian_WhatsApp_lynchings


We as a population need to be exposed to misinformation to build up immunity against it. The alternative is for those in power to "decide" for us what is true, which is even worse.

I don't trust AI and probably never will. I consider ChatGPT to be merely a PC version of 4chan's infamous /b/ board, which carries the disclaimer "The stories and information posted here are artistic works of fiction and falsehood. Only a fool would take anything posted here as fact."


> We as a population need to be exposed to misinformation to build up immunity against it.

We tried this in the 1930s in Europe and it didn't seem to go very well. See also its lasting legacy in the current Italian prime minister, Marine Le Pen's popularity in France, and AfD's electoral successes in Germany.


> We tried this in the 1930s in Europe and it didn’t seem to go very well.

I agree, a bureaucracy banning books and reference material that ran counter to the approved narrative was quite disastrous. Hopefully we don’t repeat that mistake now that everyone is neatly siloed into digital echo chambers.

Personally, I think we did pretty well against the (specifically online) deluge of misinformation and fringe ideas for about a decade or so prior to ~2010. Unfortunately, the “problem” of internet free speech seems to have been solved and now we’re just waiting to see whether the censors or the reactionaries win and what type of fresh authoritarian hell awaits on the other side.


> now we’re just waiting to see whether the censors or the reactionaries win

If you think these are the two camps, I recommend googling the words “Republican” and “book bans.”


There's a pretty big difference between wanting sexually explicit material out of classrooms and outright book bans...

I'd argue neither side has a stellar record today, but it's the other side that's more speech-stifling...


Acknowledging that queer people exist, or even that sex itself exists, is not in any way explicit. This is just American bigotry and Puritanism run amok.


Am I correct in my understanding that your comment is equating Nazi Germany and current popular right-wing politicians?


To paraphrase Mike Godwin himself, "Go for it, by all means. Fair game."

The Beer Hall Putsch of 1923 was about as effective as the January 6 assault, and just as comically incompetent. But there was a lot of sympathy for the far-right terrorists among the German judiciary, just as there is today in the US. The prosecutors and judges who could have stopped the movement in its tracks instead chose to take it easy on the perpetrators, just like today.

Now we get to wait ten years to see where, or if, these two convergent timelines diverge.


Marine Le Pen is the daughter of an actual Nazi sympathizer and her politics arose out of those circles. One of the AfD’s leaders just got charged with using Nazi slogans. Spare me the fascist version of the “I’m shocked to find gambling is going on here” spiel. It’s beyond tedious. I’ll just copy and paste Sartre on anti-Semitism in response.


Quacks like a duck, etc.

NSDAP were in power for over a decade before the first concentration camp was ever built. Their rhetoric in the 30s was not as direct and violent as their later actions, that doesn't mean they weren't still Nazis.


>NSDAP were in power for over a decade before the first concentration camp was ever built.

The first concentration camp, Nohra, opened in 1933, the year the Nazis took power. The more well known Dachau was also opened that year.


I think GP was mixing up concentration camps (Konzentrationslager), an umbrella term for both work/prison camps (never really secret; killing prisoners was a side effect but not the ostensible purpose; initially targeted German citizens with leftish politics who were vocal against the new regime) and extermination camps (a subtype of concentration camp, not publicized, engineered for mass murder, where most of the Jews and many of the Roma/Sinti in Nazi-occupied Europe still there by 1942 were sent).

Dachau was a concentration camp located in a Munich suburb that a large number of prisoners survived (though a lot did not) and were even sometimes (but not often) released from. Really bad, but nothing that awful governments hadn't done before; in fact, it was being done by our eventual allies to the East at the same time.

Auschwitz was an extermination camp (Vernichtungslager), located in a distant corner of Poland, that only a tiny fraction of the people sent there survived. These camps and this system were a new horror for the world, and the center of the Nuremberg war crimes trials. They were planned at the Wannsee Conference at the beginning of 1942, so yes, put into operation about a decade after the Nazis came to power.

Some places that were initially set up as work (to death) camps were later turned into outright extermination camps.


I can only give you an anecdote of its dangers in a different category.

My significant other, an otherwise very intelligent woman, will ask ChatGPT health questions. She knows it might be wrong but does it anyway, to debug her health. I try to point out that even being suggested a bad diagnosis is very dangerous. The advice it gives has way less nuance than, say, healthMD, which has its own flaws. And unlike with coding questions, you can't assume health advice is right until you prove it wrong.


It's always good to cross verify with a tool such as Isabel.

Luckily, ChatGPT and Isabel almost always agree in my tests.

https://symptomchecker.isabelhealthcare.com/


ChatGPT lies all the time with health questions, citing nonsense and making up explanations and diagnoses. I'm scared that anybody trusts it.


> How harmful is it really?

You seem to assume it's negative information about people. If it decides to say that some quack hawking raw almonds as a cancer cure is FDA-approved, that's extremely harmful.

Hopefully it's not that bad, but there's clearly a large inbetween here and I wouldn't be surprised if it's over the line.

That's before you even get into defamation cases.


Levine's Law: Every bad thing a public company does is securities fraud. (and also bank fraud, because companies borrow money from banks)


Why do I always get downvoted for saying this? Like it's not true?


[flagged]


Posting links to threads that have comments can be quite helpful, but half of those didn't get noticed at all and have zero comments, so it feels more like you're trying to make a point rather than help people find related conversations.


[flagged]


It's not against the guidelines to post something that has been previously posted (a small number of reposts are okay if it has yet to generate significant attention), and it is frequently the case that a topic doesn't catch on until the 3rd or 4th submission. You just came to the thread that did take off and are trying to say that it never should have been submitted because there were a whole bunch of duds that happened to get here first.

If you don't want to participate in the discussion that started here in this thread, feel free to find another thread—there's no shortage.



