
TL;DR CNBC



Nvidia has a new way to prevent A.I. chatbots from 'hallucinating' wrong facts

Publishing timestamp: 2023-04-25 11:30:24


Summary

Nvidia has announced NeMo Guardrails, new software that helps developers prevent AI models from stating incorrect facts, discussing harmful subjects, or opening up security holes. It targets the "hallucination" problem in the latest generation of large language models, a major obstacle to business adoption. NeMo Guardrails can keep LLM chatbots on specific topics, head off toxic content, and stop LLM systems from executing harmful commands on a computer. The software is open source and can be used in commercial applications.
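The guardrail idea described above can be sketched in plain Python. This is a conceptual illustration only, not NeMo Guardrails' actual API: the allow-list, deny-list, and function names below are hypothetical, and a real deployment would use semantic checks rather than keyword matching.

```python
# Hypothetical sketch of the "guardrails" pattern: run safety and topic
# checks around an LLM call so harmful requests never reach the model
# and off-topic answers never reach the user. Not NeMo Guardrails' API.

ALLOWED_TOPICS = {"gpu", "driver", "cuda"}       # hypothetical allow-list
BLOCKED_COMMANDS = {"rm", "shutdown", "format"}  # hypothetical deny-list


def guarded_reply(user_message: str, llm) -> str:
    """Wrap an LLM call with simple topic and safety rails."""
    words = set(user_message.lower().split())
    # Safety rail: refuse requests that look like system commands.
    if words & BLOCKED_COMMANDS:
        return "Sorry, I can't help execute system commands."
    # Topic rail: only answer questions on approved subjects.
    if not words & ALLOWED_TOPICS:
        return "Sorry, I can only help with GPU-related questions."
    return llm(user_message)


# A stand-in "model" for demonstration purposes.
fake_llm = lambda msg: f"Here is some help with: {msg}"

print(guarded_reply("how do I update my gpu driver", fake_llm))
print(guarded_reply("please rm everything", fake_llm))
```

In the real library the rails are defined declaratively in configuration files rather than hard-coded, but the control flow is the same: checks run before and after the model call.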


Sentiment: NEUTRAL

Tickers: NVDA, GOOGL, MSFT

Keywords: Microsoft Corp, Mobile, Breaking News: Technology, Business, Business News, Technology, Nvidia Corp, Apple Inc, Alphabet Inc, Artificial Intelligence

Source: https://www.cnbc.com/2023/04/25/nvidia-nemo-guardrails-software-stops-ai-chatbots-from-hallucinating.html


Developed by Leo Phan