That's a hard one. SO's hostility toward newbies, like that of any expert community, comes from the longstanding users having seen the basic questions thousands of times and understandably not wanting to answer variations of them over and over, while for the newbies those questions genuinely are new, and they don't yet have the routine knowledge of where, or even how, to look for solutions in the first place.
In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO for the harder questions that are still general enough to be applicable to others. LLMs seem to be getting pretty good at those as well, though, so I don't know where that leaves us.
SO for discussions of taste? "I have these two options to build this, how should I approach it?"
They tried to sell their own GPT wrapper for a while, didn't they? The use case I can see for that is:
User asks question - LLM answers it - user is unsure about the answer - it gets posted as a SO thread and the rest of the userbase can nitpick or correct the LLM response.
Edit: I also seem to remember they had a job portal in the sidebar for a while, what happened to that? Seems like a reasonable revenue stream that is also useful to users.
> In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO for the harder questions that are still general enough to be applicable to others.
I think the deeper question is how SO would get paid for that.
Historically, SO has been funded by advertising. Users would google their question, land on SO, get an answer, and SO would get paid by advertisers. (The job portal was a variation on the advertising product.)
Even in your ideal world, newbies and experts would first ask their questions to an LLM. The LLM might search SO and find the answer there, but the user would get the answer without viewing an ad, so SO wouldn't get paid for that.
The same issue is facing Wikipedia. Wikipedia isn't funded by commercial advertisers, but they are funded by donations, which are driven by ads. If LLMs just answer the questions based on Wikipedia data, the user won't see the Wikipedia ad asking them to donate; they may not even know that Wikipedia was the source of the information, so they may not even develop a fondness for Wikipedia that's necessary to get users excited to donate.
This is why you see people shouting about how LLMs are "killing the web." I think it's more correct to say that LLMs are killing free web resources. Without advertising, not even donation-funded resources can remain available for free.
Oh, I was thinking more of user enters question into SO -> LLM answer on SO -> user evaluates whether LLM answer was sufficient (or system itself judges whether answer is also interesting to other users?) -> question + answer combo made public, judged by other users.
There are of course several huge issues with this, but that's why I prefaced it with "ideal world" hahaha
The biggest of which is why most users would want their questions publicized if the ChatGPT answer off the Stack Overflow platform is enough, or even better.
Or how existing users and question-answering volunteers would feel about being reduced to cleanup and training data for LLMs.
I used a system prompt similar to this, where I just dumped the entirety of https://grugbrain.dev/ into it and prefaced it with the assistant having to emulate grug.
Didn't find it particularly useful, but it is funny!
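The trick above (dumping the whole page into a system prompt and telling the model to emulate the author) can be sketched roughly like this. This is a minimal sketch, not what I actually ran: `build_grug_messages` and `page_text` are hypothetical names, and the fetch of https://grugbrain.dev/ plus the actual chat-completion API call are left out.

```python
# Hypothetical sketch of the persona-prompt setup described above.
# `page_text` stands in for the scraped contents of https://grugbrain.dev/;
# fetching it and sending the payload to an LLM API are omitted.

def build_grug_messages(page_text: str, user_question: str) -> list[dict]:
    """Assemble a chat payload whose system prompt tells the model
    to answer in the grug-brained developer's voice."""
    system_prompt = (
        "You are grug, the grug-brained developer. "
        "Answer every question in grug's voice and apply grug's philosophy. "
        "Here is the full text whose style and views you must emulate:\n\n"
        + page_text
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

messages = build_grug_messages(
    "complexity very, very bad",          # placeholder for the real page text
    "Should I add a plugin system to my app?",
)
print(messages[0]["role"])  # system
```

The resulting `messages` list is the standard chat-format payload most LLM APIs accept, so the persona lives entirely in the first system message.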
I actually feel like these integrations are fine, as long as they are opt-in or easy to opt out of permanently. For now, I don't see the harm in adding another default search engine; it's much less obtrusive than the home page sponsored links. And if it gets them a little more independent from Google by siphoning Perplexity's seemingly infinite VC investment money, so be it.
I wonder if the rigidity could be improved while staying modular, maybe just use many more screws? I don't mind undoing more than 5 screws for the bottom to come off, make it 20 and it's still totally fine.
IIRC from one of their videos, they mentioned that they deliberately use cast aluminium instead of CNC-machined aluminium like the MacBook. If they deliberately sacrifice build quality for sustainability, I don't see how they could compete with Apple on feel.
What is the implementation difference between using the system WebView (fragmented, and especially bad under Linux) and using one shared Tauri base runtime that only gets breaking-change updates every two years or so, so there aren't twenty different copies running at the same time and it ends up like Electron?
Would bundling one extended-support release of Chromium's or Firefox's backend, shared between all Tauri apps, not suffice?
They mention FSR specifically in the trailer, but this comes with RDNA3, meaning no FSR4 currently. Does this mean the int8 path for FSR4 is going to become official to support this and the PS5 Pro?
Now for speculation on top of speculation on top of speculation: Valve's next VR headset (Deckard / Steam Frame) is also rumored to use an ARM chip, and with Valve being quite close with AMD since the Steam Deck's custom APU (although that one was apparently just something originally intended for Magic Leap before that fell apart), this chip could be in there and be powerful enough to run standalone VR.