
LLM Use

6x faster agents with custom-trained LLM. 20 steps per minute.

We trained a custom LLM that cuts latency by 6x while keeping the same task performance. Agents can now take 20 steps per minute.

import asyncio

from browser_use import Agent, ChatBrowserUse

# Initialize the custom-trained model
llm = ChatBrowserUse()

# Create an agent with the model
agent = Agent(
    task="...",  # Your task here
    llm=llm,
)

# Run the agent
asyncio.run(agent.run())

For a more technical deep dive, read the blog post at /posts/llm-gateway.

