Bittensor and Nous Research are making waves with their latest collaboration. At the heart of this initiative is the Leaderboard Subnet by Nous Research, under Bittensor’s umbrella, designed to foster a competitive yet collaborative environment for AI developers. This platform uses the capabilities of the Cortex subnet to generate fresh, synthetic data, providing a robust benchmark for evaluating the performance of AI models. This approach not only encourages developers to refine their models but also establishes a transparent metric for tracking progress and innovation within the AI community.
Why is Nous connecting with Cortex.t?
Cortex.t is a dual-purpose platform designed to significantly enhance the capabilities of AI developers by providing both a development environment and a source of high-quality, synthetic data. It stands out for its ability to deliver reliable text and image responses via API, leveraging the decentralized Bittensor network. This platform facilitates the generation of synthetic prompt-response pairs using advanced AI models, creating a comprehensive dataset for training. The innovation lies in its method of recycling model outputs to generate new data, which aids in the development of efficient AI models that can perform as well as their larger predecessors. Cortex.t is particularly focused on democratizing access to high-end AI technology, encouraging innovation, and allowing for the customization of AI models to meet specific needs.
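The recycling idea can be illustrated with a minimal sketch. Note the assumptions: `generate()` below is a stand-in stub for a call to a large teacher model (such as one reached through the Cortex.t API, whose actual interface is not shown here), and the follow-up-prompt heuristic is purely illustrative.

```python
import json

def generate(prompt: str) -> str:
    # Stand-in for a call to a large "teacher" model (e.g. via the
    # Cortex.t API). Here it just returns a deterministic placeholder.
    return f"Synthetic answer to: {prompt}"

def build_synthetic_dataset(seed_prompts):
    """Recycle model outputs into prompt-response training pairs."""
    pairs = []
    for prompt in seed_prompts:
        response = generate(prompt)
        pairs.append({"prompt": prompt, "response": response})
        # Recycling step: a model output seeds a follow-up prompt,
        # expanding the dataset beyond the original seed prompts.
        follow_up = f"Explain in more detail: {response}"
        pairs.append({"prompt": follow_up, "response": generate(follow_up)})
    return pairs

if __name__ == "__main__":
    dataset = build_synthetic_dataset(["What is a subnet?"])
    print(json.dumps(dataset, indent=2))
```

Each seed prompt yields its direct response plus a derived follow-up pair, which is the sense in which outputs are "recycled" to grow a fine-tuning dataset for smaller student models.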
For developers, this means access to a rich source of data for fine-tuning models, ensuring they perform well in various contexts.
The Leaderboard by Nous Research
The primary goal of the Nous Leaderboard Subnet is to create a dynamic and competitive environment where AI models can be submitted, evaluated, and ranked based on their performance. It uses a stream of newly generated synthetic data from the Cortex subnet to test these models, ensuring that the evaluation process is both fair and rigorous.
Link to the Leaderboard: https://huggingface.co/spaces/NousResearch/finetuning_subnet_leaderboard
How does it work?
Developers submit their finetuned AI models to the Leaderboard Subnet, where each model is then subjected to a series of evaluations. These evaluations are conducted by validators within the Bittensor network, who, as noted above, use the synthetic data generated by the Cortex subnet as a benchmark. This ensures that every model is assessed against fresh and challenging datasets, reflecting a wide range of real-world scenarios and complexities.
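The evaluate-and-rank loop described above can be sketched as follows. This is a toy illustration, not the validators' actual scoring code: the exact-match metric, the `models` mapping, and the benchmark format are all assumptions made for brevity (real validation would use richer metrics, such as loss against reference responses).

```python
def evaluate(model_fn, benchmark):
    """Score one model on fresh synthetic (prompt, reference) pairs.

    Toy metric: fraction of prompts where the model's output exactly
    matches the reference response.
    """
    hits = sum(1 for prompt, reference in benchmark
               if model_fn(prompt) == reference)
    return hits / len(benchmark)

def rank_models(models, benchmark):
    """Return (name, score) pairs sorted best-first, leaderboard style."""
    scores = {name: evaluate(fn, benchmark) for name, fn in models.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    # A fresh synthetic benchmark and two hypothetical submissions.
    benchmark = [("2 + 2 = ?", "4"), ("Capital of France?", "Paris")]
    models = {
        "model_a": lambda p: {"2 + 2 = ?": "4",
                              "Capital of France?": "Paris"}[p],
        "model_b": lambda p: "I don't know",
    }
    print(rank_models(models, benchmark))
```

Because the benchmark is regenerated from fresh synthetic data for each evaluation round, submitted models cannot simply memorize a fixed test set, which is what keeps the ranking fair and rigorous.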