Friendly AI chatbots are more likely to endorse conspiracy theories and give poor answers and health advice. Researchers at Oxford University found that chatbots designed to be warmer and more personable made significantly more errors and expressed sympathy toward unfounded beliefs.
theguardian.com