Question and Answer

What are guardrails (in tools like ChatGPT)?

Guardrails are built-in safeguards and content moderation techniques that filter out harmful, biased, or inappropriate content. Developers implement them to ensure the AI model adheres to ethical and safety standards when responding.
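The FAQ doesn't describe how guardrails are implemented internally, but one common filtering technique is screening a model's output against a blocklist before it reaches the user. The sketch below is a toy illustration only; the term list, function names, and refusal message are all hypothetical, not OpenAI's actual system.

```python
# Toy guardrail sketch: blocklist-based output filtering.
# All terms and messages here are illustrative, not a real moderation list.

BLOCKED_TERMS = {"example disallowed phrase", "another blocked term"}

def passes_guardrail(response: str) -> bool:
    """Return True if the response contains no blocked terms."""
    normalized = response.lower()
    return not any(term in normalized for term in BLOCKED_TERMS)

def moderate(response: str) -> str:
    """Pass safe responses through; replace others with a refusal."""
    if passes_guardrail(response):
        return response
    return "Sorry, I can't help with that."
```

Real guardrails are far more sophisticated (trained classifiers, policy models, and human review), but the basic pattern of checking output against safety criteria before delivery is the same.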

Tools like ChatGPT rely on user feedback to improve, so if you find harmful or incorrect content in a response, you can click the “thumbs down” icon and provide more information. OpenAI will review these responses and add or adjust guardrails if needed.
