Good evidence confuses ChatGPT when used for health information, study finds

A world-first study has found that when ChatGPT is asked a health-related question, the more evidence it is given, the less reliable its answers become, with the accuracy of its responses dropping to as low as 28%.