Glossary / Red-Teaming Red-Teaming Deliberately probing an AI system to find safety, security, or bias weaknesses before it is deployed.