Meta's Self-Taught Evaluator

Meta has announced the release of the ‘Self-Taught Evaluator’, a model designed to significantly reduce human input in AI training and assessment.

The tech giant, best known for its social media platforms Facebook and Instagram, has its sights set on the cutting edge of technology. Meta makes significant investments not only in AI but also in the wearables space, with a focus on building its Metaverse.

As the AI space develops at breakneck speed, Meta hopes to lead the way in safely taking humans further out of the loop by automating the evaluation process for AI.

By entrusting the evaluation tasks to AI systems, Meta aims to reduce human error, speed up development cycles, and uncover insights that might be missed by human evaluators.

The Self-Taught Evaluator is a significant leap forward in automating AI. In this article we’ll explain what Meta’s Self-Taught Evaluator is, how it works, and how to start using it.

What is Meta’s Self Taught Evaluator and How Does It Work?

Meta’s Self-Taught Evaluator is an AI model specifically designed to evaluate and improve the performance of other AI models.

The Self-Taught Evaluator system generates contrasting model outputs. This means that several AI models perform the same task, and their outputs are compared to identify inconsistencies. By assessing these differences, it becomes easier to see where models are having issues or where there is a flaw that needs to be addressed.
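The pairing step described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Meta's actual pipeline; the function name and data layout are assumptions.

```python
from itertools import combinations

def build_contrasting_pairs(instruction, candidate_outputs):
    """Pair up candidate responses to the same instruction so that a
    judge can later compare each pair and flag inconsistencies."""
    return [
        {"instruction": instruction, "response_a": a, "response_b": b}
        for a, b in combinations(candidate_outputs, 2)
    ]

# Toy candidates standing in for outputs sampled from different models.
pairs = build_contrasting_pairs(
    "Summarise the water cycle in one sentence.",
    [
        "Water evaporates, condenses, and precipitates.",
        "Rain falls.",
        "The sun heats oceans; vapour forms clouds that rain back down.",
    ],
)
print(len(pairs))  # 3 candidates -> 3 pairwise comparisons
```

Every pair of candidates becomes one comparison for the judge, which is where the "contrasting outputs" come from.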

It also trains an LLM-as-a-Judge: a language model tasked with evaluating the outputs of other AI models. It analyzes the reasoning traces generated by different models, that is, the explanations each model gives for its decisions. Through this, the LLM-as-a-Judge can determine which model's output is the most accurate.
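In practice, an LLM-as-a-Judge is driven by a prompt that asks for a reasoning trace followed by a machine-readable verdict. The template and verdict format below are illustrative assumptions, not Meta's published prompt.

```python
import re

# Hypothetical judge prompt; the real system's wording is not given in
# this article, so this template is illustrative only.
JUDGE_TEMPLATE = (
    "You are an impartial judge. First write out your reasoning step by "
    "step (a reasoning trace), then give your final verdict as [[A]] or "
    "[[B]].\n\n"
    "Instruction: {instruction}\n\n"
    "Response A: {response_a}\n\n"
    "Response B: {response_b}"
)

def build_judge_prompt(instruction, response_a, response_b):
    """Fill the judge template with one contrasting pair."""
    return JUDGE_TEMPLATE.format(
        instruction=instruction,
        response_a=response_a,
        response_b=response_b,
    )

def parse_verdict(judge_output):
    """Pull the final [[A]]/[[B]] verdict out of a judge completion."""
    match = re.search(r"\[\[([AB])\]\]", judge_output)
    return match.group(1) if match else None

# A canned judge completion, standing in for a real model call.
sample = "Response A covers all three stages of the cycle. Verdict: [[A]]"
print(parse_verdict(sample))  # A
```

Keeping the verdict in a fixed bracketed format makes the judge's decision easy to extract automatically, which matters when the judgements feed back into training.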


It also features an iterative self-improvement scheme. As the name suggests, this means the model continuously learns and improves itself without human intervention. The initial judgements made by the LLM-as-a-Judge are used to retrain the models being tested. After being adjusted, the AI models are re-evaluated. This continuous cycle of evaluation, feedback, improvement and re-evaluation improves not only the models being assessed but also the Self-Taught Evaluator's ability to assess them.

The Self-Taught Evaluator has also been trained with ‘direct preference optimization’ (DPO), a training method that directly optimizes the model on preference data rather than relying on explicit human labels or annotations. Meta states that even without human intervention, the model is highly effective at judging the quality of other AI models, as measured by the RewardBench standard. This is a benchmark used to evaluate the performance of generative reward models: AI models that generate scores to assess the quality of other AI models’ outputs.
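For a concrete sense of what DPO optimizes, here is its per-pair loss computed with plain Python floats. The formula is the standard published DPO objective; the toy log-probability values are made up for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair. Inputs are log-probabilities of
    the chosen/rejected responses under the model being trained (pi_*)
    and under a frozen reference model (ref_*)."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(sigmoid(margin))

# When the model has not yet moved away from the reference, the margin
# is zero and the loss is log(2).
print(round(dpo_loss(-5.0, -5.0, -5.0, -5.0), 4))  # 0.6931
```

The loss shrinks as the model assigns relatively more probability to the preferred response than the reference does, so training on judge-generated preference pairs needs no human-written labels.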

Meta claims that the Self-Taught Evaluator outperforms bigger models that use human-annotated labels, including its own Llama-3.1-405B-Instruct, as well as GPT-4 and Gemini-Pro.

It has also placed among the best evaluators on AlpacaEval, a benchmark for assessing how well AI models can evaluate other AI models. It has a high "human agreement rate", meaning its judgments are the same as, or at least very similar to, judgments made by human evaluators.

How to Download Meta’s Self-Taught Evaluator

Meta’s Self-Taught Evaluator is almost too good to be true; if you’d like to see it in action, it is easy to download and implement.

1. Sign into Hugging Face.

2. Search for Self-taught-evaluator-llama3.1-70B on the model hub.

3. Review the Self-Taught Evaluator Research License and Acceptable Use Policy.

4. If you are happy with the content of the license and policy, fill out your contact information. You need to agree to share your contact information to access the model.

5. Accept the terms and conditions and click the button labeled ‘I Accept Self-Taught Evaluator Research License and AUP’.

6. Your request will then be reviewed by Meta’s repository authors; once approved, you will be able to download the Self-Taught Evaluator.
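Once your request is approved, the weights can also be fetched programmatically. A minimal sketch using the `huggingface_hub` library; the repo id is assumed from the model name above, and the token placeholder must be replaced with an access token from the approved account.

```python
from huggingface_hub import snapshot_download

# Assumed repo id, based on the model name listed above.
REPO_ID = "facebook/Self-taught-evaluator-llama3.1-70B"

# The repository is gated, so a token from an account whose access
# request Meta has approved is required. Replace the placeholder below.
local_dir = snapshot_download(
    repo_id=REPO_ID,
    token="hf_your_token_here",  # placeholder: your Hugging Face token
)
print(local_dir)  # local path to the downloaded model files
```

Downloading a 70B-parameter model requires substantial disk space, so check available storage before running this.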