2 Comments
Daniel Popescu / ⧉ Pluralisk

Love this take — the hallucination issue is a huge blocker for LLMs. Do you think we'll ever truly get past this 'probabilistic' hurdle for good?

Jeff Mohl

I think it will always be a risk you need to consider and can't be completely eliminated, but we'll keep getting better at mitigating it until it's something you only need to worry about in the most high-stakes situations. Progress on this has been far faster than I expected over the last few years, especially in domains that are easy to check, like writing code.