Nvidia has a new way to prevent A.I. chatbots from 'hallucinating' wrong facts
Publishing timestamp: 2023-04-25 11:30:24
Summary
Nvidia has announced new software called NeMo Guardrails that helps software makers prevent AI models from stating incorrect facts, straying into harmful subjects, or opening up security holes. The software is designed to address the "hallucination" problem in the latest generation of large language models, a major obstacle to business adoption. NeMo Guardrails can keep LLM chatbots on approved topics, head off toxic content, and prevent LLM systems from executing harmful commands on a computer. The software is open source and can be used in commercial applications.
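To illustrate the guardrail idea described above, here is a minimal, hypothetical sketch of a topic guardrail in Python. This is not the NeMo Guardrails API (which defines "rails" in its own Colang configuration language); the names `BLOCKED_TOPICS`, `ALLOWED_TOPICS`, `guard`, and `fake_llm` are all invented for illustration.

```python
# Hypothetical sketch of a topic guardrail, NOT the NeMo Guardrails API.
# The real library drives rails with an LLM and Colang config; this toy
# version uses simple keyword matching to show the concept.

BLOCKED_TOPICS = {"politics", "violence"}   # hypothetical deny-list
ALLOWED_TOPICS = {"billing", "shipping"}    # hypothetical allow-list


def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"Model answer to: {prompt}"


def guard(prompt: str) -> str:
    """Refuse blocked or off-topic prompts; otherwise call the model."""
    words = set(prompt.lower().split())
    if words & BLOCKED_TOPICS:
        return "I can't discuss that topic."
    if not words & ALLOWED_TOPICS:
        return "I can only help with billing and shipping questions."
    return fake_llm(prompt)
```

In a real deployment, the keyword checks would be replaced by model-driven intent classification, which is the approach the article attributes to NeMo Guardrails.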
Sentiment: NEUTRAL
Keywords: microsoft corp, mobile, breaking news: technology, business, business news, technology, nvidia corp, apple inc, alphabet inc, artificial intelligence