
# The Fastest Web Agent in the World

**Author:** Gregor Zunic
**Date:** 2025-10-08
> We built a special LLM gateway that reduces the latency by 6x while keeping the same performance. The agents can now take 20 steps per minute. How many can a human do?

---

**Browser Use 0.8.0** introduces the `ChatBrowserUse` gateway, which cuts latency like crazy (6x faster than computer use models) while keeping the same performance.
Our agents can now take **20 steps per minute**. How many can a human do?

![LLM Gateway Performance: Accuracy vs Latency](https://browser-use.com/images/llm-gateway-accuracy-latency.png)
BU 1.0 achieves the same accuracy (~65%) as other state-of-the-art computer use models (Claude Sonnet 4, Claude Sonnet 4.5, Gemini 2.5 Computer Use, OpenAI Computer-Using Model) while being **4x faster**.
The latency shown is pure LLM inference time: just the time the model takes to decide what to do next. Our latency is ~68 seconds per trajectory, compared to ~225-330 seconds for the other models.
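As a rough sanity check, the per-trajectory numbers above can be turned into a speedup range directly:

```python
# Per-trajectory LLM inference latency from the benchmark above (seconds).
ours = 68
others = (225, 330)

# Speedup relative to the fastest and slowest competing models.
speedups = [t / ours for t in others]
print([round(s, 1) for s in speedups])  # roughly 3.3x to 4.9x, i.e. ~4x
```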
## One API Key. That's It.
All you need is a `BROWSER_USE_API_KEY` environment variable; the gateway takes care of the rest.
```python
"""
1. Get your API key from https://cloud.browser-use.com/dashboard/api
2. Set environment variable: export BROWSER_USE_API_KEY="your-key"
"""
import asyncio

from browser_use import Agent
from browser_use.llm import ChatBrowserUse


async def main():
    agent = Agent(
        task='Find the number of stars of the following repos: browser-use, playwright, stagehand, react, nextjs',
        # ChatBrowserUse reads BROWSER_USE_API_KEY from the environment.
        llm=ChatBrowserUse(),
    )
    # Run the agent
    await agent.run()


asyncio.run(main())
```
That's it. One environment variable and you're running at 2x speed.
## Get Started
1. Go to [cloud.browser-use.com](https://cloud.browser-use.com/dashboard/api)
2. Create a `BROWSER_USE_API_KEY`
3. Plug it into your `.env` or pass it to `ChatBrowserUse`
4. Enjoy insanely fast browser use inference
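For the `.env` option in step 3, here is a tiny stdlib-only sketch of loading the key into the environment (in practice you might use the `python-dotenv` package instead; the `load_env` helper below is hypothetical, not part of browser-use):

```python
import os
from pathlib import Path


def load_env(path: str = '.env') -> None:
    """Minimal .env loader: KEY=value lines, no quoting or variable expansion."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        # Skip blanks, comments, and lines without an assignment.
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, _, value = line.partition('=')
        # Don't clobber variables already set in the real environment.
        os.environ.setdefault(key.strip(), value.strip().strip('"'))


load_env()
```

Once `BROWSER_USE_API_KEY` is in the environment, `ChatBrowserUse()` picks it up with no further configuration.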
## Pricing

| Token Type | Price per 1M tokens |
| --- | --- |
| Input tokens | $0.50 |
| Output tokens | $3.00 |
| Cached tokens | $0.10 |
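The table translates into simple per-run arithmetic. A quick sketch (the token counts in the example are hypothetical):

```python
# Gateway pricing in USD per 1M tokens, from the table above.
PRICE_PER_M = {'input': 0.50, 'output': 3.00, 'cached': 0.10}


def cost_usd(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the gateway cost in USD for one run."""
    return (
        input_tokens * PRICE_PER_M['input']
        + output_tokens * PRICE_PER_M['output']
        + cached_tokens * PRICE_PER_M['cached']
    ) / 1_000_000


# Hypothetical run: 2M input, 100k output, 1M cached tokens.
print(f'${cost_usd(2_000_000, 100_000, 1_000_000):.2f}')  # $1.40
```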

## Enterprise offer
For enterprises that want to speed up their browser use flows by 2x, we offer dedicated infrastructure and priority support. If you're interested in deploying production agents or need help with your use case, reach out to us at [x.com/browser_use](https://x.com/browser_use) or contact us at [enterprise@browser-use.com](mailto:enterprise@browser-use.com).

Update to Browser Use 0.8.0:

```bash
uv add browser-use==0.8.0
```
