philip lelyveld The world of entertainment technology

23 Aug 2024

Inconsistent Safeguards in AI Chatbots Can Lead to Health Disinformation

A study published earlier this year in BMJ evaluated how effectively the safeguards of large language models (LLMs) prevent chatbots from generating health disinformation when prompted. It found that while some AI chatbots consistently declined to produce false information, other models frequently generated false health claims, especially when prompted with ambiguous or complex health scenarios. In addition, the study found that the safeguards were inconsistent: some models provided accurate information in one instance but not in others under similar conditions. The researchers criticized the lack of transparency from AI developers, who often did not disclose the specific measures they had taken to mitigate these risks.

Source: Menz, B. D., Kuderer, N. M., Bacchi, S., Modi, N. D., Chin-Yee, B., Hu, T., ... & Hopkins, A. M. (2024). Current safeguards, risk mitigation, and transparency measures of large language models against the generation of health disinformation: repeated cross-sectional analysis. BMJ, 384.
