People are revealing sensitive personal information to A.I. chatbots — including plans to commit violent acts.
UTSA study: ~20% of AI-suggested code packages don't exist. "Slopsquatting" (registering those hallucinated names) could let attackers slip malicious libraries into projects.
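One defensive habit the slopsquatting risk suggests: vet AI-suggested dependency names against a curated allowlist before installing anything. The sketch below is a minimal illustration under that assumption; the allowlist contents and the flagged package name are hypothetical, not real audit data.

```python
# Sketch: split AI-suggested dependencies into approved vs. unverified
# before running any install command, to reduce slopsquatting exposure.

KNOWN_GOOD = {"requests", "numpy", "flask"}  # assumption: your vetted set

def vet(suggested):
    """Return (approved, unverified) lists for AI-suggested package names."""
    approved = [p for p in suggested if p.lower() in KNOWN_GOOD]
    unverified = [p for p in suggested if p.lower() not in KNOWN_GOOD]
    return approved, unverified

if __name__ == "__main__":
    # "fastjson-utils" is a made-up, hallucinated-sounding name.
    ok, risky = vet(["requests", "fastjson-utils"])
    print("approved:", ok)
    print("needs manual review:", risky)
```

Anything in the "unverified" bucket would get a manual check (does the package exist, who publishes it, how old is it) before being added to a project.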