RAG? Sure. I've even implemented systems that use it, and that enable it, myself.
And guess what: RAG doesn't prevent hallucination. It can reduce it, and there are most certainly areas where it's incredibly useful (I should know, because that's what earns my paycheck), but it's useful despite hallucinations still being a thing, not because we solved that problem.
Are you implying that you’re the same person I was commenting to or are you just throwing your opinion into the mix?
Regardless, we’ve seen ~98% accuracy with simple context-based prompting across every category of generation task. Don’t take my word for it; a simple search will show the effectiveness of “n-shot” prompting. Framing it as “it _can_ reduce” hallucinations is disingenuous at best; there really is no debate about how well it works. We can disagree on whether 98% accuracy is a solution, but again, I’d assert that for >50% of all possible real-world uses of an LLM, 98% is acceptable, and thus the problem can colloquially be referred to as solved.
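For anyone unfamiliar, here’s a minimal sketch of what I mean by context-based / n-shot prompting. Everything in it (the example Q&A pairs, the `build_prompt` helper, the refusal instruction) is hypothetical and just illustrates the shape of the prompt, not any particular model or API:

```python
# Minimal sketch of n-shot (few-shot) prompting over retrieved context.
# All content and the build_prompt helper are made up for illustration.

FEW_SHOT_EXAMPLES = [  # the "n shots": worked Q&A pairs grounded in context
    {
        "context": "Invoice #1042 was issued on 2024-03-01 for $250.",
        "question": "When was invoice #1042 issued?",
        "answer": "2024-03-01",
    },
    {
        "context": "The warranty covers parts for 12 months from purchase.",
        "question": "How long does the warranty cover parts?",
        "answer": "12 months from purchase",
    },
]

def build_prompt(retrieved_context: str, question: str) -> str:
    """Assemble a few-shot prompt: demonstrations first, then the real task."""
    parts = []
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(
            f"Context: {ex['context']}\n"
            f"Question: {ex['question']}\n"
            f"Answer: {ex['answer']}\n"
        )
    # The real query, grounded in retrieved context. Telling the model to
    # refuse rather than guess is where much of the hallucination
    # reduction comes from.
    parts.append(
        f"Context: {retrieved_context}\n"
        f"Question: {question}\n"
        "Answer (say 'not in context' if the context doesn't contain it):"
    )
    return "\n".join(parts)

print(build_prompt("Order #7 shipped on 2024-06-05.", "When did order #7 ship?"))
```

The demonstrations teach the answer format, and the grounding instruction constrains the model to the supplied context; that combination is what the accuracy numbers I’m describing come from.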
If you’re placing the bar at 100% hallucination-free accuracy, then I’ve got some bad news for you about the accuracy of the floating-point operations we run the world on.
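Case in point, a quick check you can run anywhere (nothing here is specific to any LLM stack):

```python
import math

# Even basic IEEE 754 floating point isn't "100% accurate":
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# Yet we don't call floating point "unsolved"; we engineer around it
# with tolerances:
print(math.isclose(0.1 + 0.2, 0.3))  # True
```

We accept bounded error there and build reliable systems on top of it; the same engineering posture applies to a 98%-accurate generation pipeline.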