
Our First Open-Source LLM

30B params, 3B active. 200 tasks per $1.

[Benchmark chart: BU-30B-A3B-Preview]

BU-30B-A3B-Preview is here.

  • 30B total parameters with only 3B active at inference time
  • 200 tasks per $1 — 4x more cost-efficient than BU 1.0
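The headline numbers imply a simple per-task cost. A quick sketch of the arithmetic (the 200 tasks per $1 and 4x figures come from this announcement; the BU 1.0 rate is derived from them):

```python
# Cost arithmetic implied by the announcement's figures.
tasks_per_dollar = 200  # BU-30B-A3B-Preview: 200 tasks per $1
cost_per_task = 1 / tasks_per_dollar
print(f"Cost per task: ${cost_per_task:.4f}")

# "4x more cost-efficient than BU 1.0" implies BU 1.0 ran ~50 tasks per $1.
bu1_tasks_per_dollar = tasks_per_dollar / 4
print(f"Implied BU 1.0 rate: {bu1_tasks_per_dollar:.0f} tasks per $1")
```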

Try it out with the library or on our cloud:

import asyncio

from browser_use import Agent
from browser_use.llm import ChatBrowserUse

# Initialize with the new model
llm = ChatBrowserUse(model="bu-30b-a3b-preview")

agent = Agent(
    task="Your task here",
    llm=llm,
)

# agent.run() is a coroutine, so drive it with an event loop
async def main():
    result = await agent.run()

asyncio.run(main())

Download from Hugging Face.
