What the Dev?

274: AI testing AI? A look at CriticGPT

In this episode, we speak with Rob Whiteley, CEO of Coder, about OpenAI's recent announcement of CriticGPT, a new AI model that critiques ChatGPT responses in order to help the humans training GPT models better evaluate outputs during reinforcement learning from human feedback (RLHF). According to OpenAI, CriticGPT isn't perfect, but it does help trainers catch more problems than they would on their own.

Key talking points include:

  • The downsides of having AI test the quality of other AI models
  • Why it's important to be specific about the types of errors the model should look for
  • Is this another example of rushing into AI?