LBB.AI Toolbox (Link, Build, Beyond)

    AI Benchmark

    Total: 1 article

    MMLU

    MMLU (Massive Multitask Language Understanding) is a benchmark introduced by researchers at the University of California, Berkeley in September 2020. It evaluates large language models' multitask understanding with multiple-choice questions spanning 57 subjects, from STEM to the humanities and social sciences.
    Tags: Model Evaluation, AI Benchmark, AI Model Assessment, Language Model Evaluation
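
    As a rough illustration of how MMLU results are typically computed, here is a minimal sketch of scoring a model on one subject. It assumes the community-hosted "cais/mmlu" dataset on Hugging Face and a hypothetical predict_choice() stub standing in for a real model call; neither comes from this page.

```python
# Minimal MMLU scoring sketch (assumes the Hugging Face "cais/mmlu" dataset;
# predict_choice() is a hypothetical placeholder for an actual model call).
from datasets import load_dataset


def predict_choice(question: str, choices: list[str]) -> int:
    # Placeholder: a real implementation would prompt a model with the question
    # and its four options, then map the chosen letter back to an index 0-3.
    return 0


def mmlu_accuracy(subject: str = "abstract_algebra") -> float:
    # Each row carries "question", "choices" (four strings), and "answer",
    # the index of the correct choice.
    test = load_dataset("cais/mmlu", subject, split="test")
    correct = sum(
        int(predict_choice(row["question"], row["choices"]) == row["answer"])
        for row in test
    )
    return correct / len(test)


if __name__ == "__main__":
    print(f"MMLU accuracy (abstract_algebra): {mmlu_accuracy():.3f}")
```

    Reported MMLU scores are usually the average accuracy over all 57 subjects, so a full run would repeat this for every subject configuration rather than a single one.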


    Explore 1,100+ AI tools on LBB.AI Toolbox (Link, Build, Beyond), your smart platform for global AI resources, powered by the LBBAI 1.0 intelligent ranking launched in May 2025. Get daily updates on the best tools to automate tasks and stay ahead in the AI revolution.


    Copyright © 2025 LBB.AI (Link, Build, Beyond)   
    Popular Searches: PPT, Excel, Photo Restoration, Paper, Video, Human Photo