
Browser Use Model - BU 2.0

+12% accuracy. Same speed. No tradeoffs.

[Figure: BU 2.0 benchmark comparison]

BU 2.0 is here.

  • +12% accuracy over BU 1.0 (74.7% → 83.3%)
  • Similar speed — ~62s average task duration
  • Matches Claude Opus 4.5 on accuracy (83.3% vs 82.3%) while running ~40% faster

Benchmark Results

Model              Accuracy    Avg Task Duration
BU 2.0             83.3%       62s
BU 1.0             74.7%       58s
Claude Opus 4.5    82.3%       104s
Gemini 3 Pro       81.7%       143s
GPT-5.2            70.9%       196s

Pricing

Model                 Input       Cached input    Output
bu-1-0 / bu-latest    $0.20/1M    $0.02/1M        $2.00/1M
bu-2-0                $0.60/1M    $0.06/1M        $3.50/1M
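
To get a feel for what these rates mean per task, here is a minimal back-of-the-envelope sketch. The per-million-token prices come from the table above; the token counts and the split between cached and uncached input are illustrative assumptions, not measurements.

# Rough per-task cost estimate for bu-2-0.
# Token counts below are assumptions for illustration only.
input_tokens = 150_000    # assumed: prompts + page state sent over a task
cached_tokens = 100_000   # assumed: portion of input billed at the cached rate
output_tokens = 5_000     # assumed: model output (actions, reasoning)

price = {"input": 0.60, "cached": 0.06, "output": 3.50}  # bu-2-0, $ per 1M tokens

cost = (
    (input_tokens - cached_tokens) * price["input"]
    + cached_tokens * price["cached"]
    + output_tokens * price["output"]
) / 1_000_000
print(f"Estimated cost: ${cost:.4f} per task")

With these assumed numbers the estimate works out to roughly $0.05 per task; your actual cost depends entirely on how many tokens your tasks consume and how much of the input is cached.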

Quick Start

import asyncio
from browser_use import Agent
from browser_use.llm import ChatBrowserUse
 
async def main():
    # Use the new bu-2-0 model
    llm = ChatBrowserUse(model="bu-2-0")
 
    agent = Agent(
        task="Your task here",
        llm=llm
    )
 
    result = await agent.run()
    return result
 
asyncio.run(main())

Get your API key at cloud.browser-use.com.
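Once you have a key, you need to make it available to ChatBrowserUse before running the agent. A minimal sketch, assuming the key is picked up from an environment variable (commonly BROWSER_USE_API_KEY; check the docs for the exact variable or constructor parameter):

import os

# Assumption: ChatBrowserUse reads the Browser Use Cloud key from the
# environment. The variable name here is an assumption; verify it against
# the documentation at cloud.browser-use.com.
os.environ.setdefault("BROWSER_USE_API_KEY", "your-api-key-here")

In practice you would set the variable in your shell or deployment environment rather than in code, so the key never ends up in source control.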
