Show HN: A web browser agent in your Chrome side panel
Hey HN,
I'm excited to share BrowserBee, a privacy-first AI assistant in your browser that allows you to run and automate tasks using your LLM of choice (currently supports Anthropic, OpenAI, Gemini, and Ollama). Short demo here: https://github.com/user-attachments/assets/209c7042-6d54-4fc...
Inspired by projects like Browser Use and Playwright MCP, its main advantage is the browser extension form factor, which makes it more convenient for day-to-day use, especially for less technical users. It's also a bit less cumbersome to use on websites that require you to be logged in, as it attaches to the same browser instance you use (on privacy: the only data that leaves your browser is the communication with the LLM - there is no tracking or data collection of any sort).
Some of its core features are as follows:
- a memory feature that lets users save common, useful pathways, making the next repetition of those tasks faster and cheaper
- real-time token counting and cost tracking (inspired by Cline)
- an approval flow for critical tasks such as posting content or making payments (also inspired by Cline)
- tab management allowing the agent to execute tasks across multiple tabs
- a range of browser tools for navigation, tab management, interactions, etc., which are broadly in line with Playwright MCP
I'm actively developing BrowserBee and would love to hear any thoughts, comments, or feedback.
Feel free to reach out via email: parsa.ghaffari [at] gmail [dot] com
-Parsa
Looks amazing, love it. And I see that in your roadmap the top thing is saving/replaying sessions
Related to that, I'd suggest also adding the ability to "templify" sessions, i.e. turn sessions into something like email templates, with placeholder tags or the like, that either ask the user for input or can be fed input from somewhere else (like an "email merge")
So for example, if I need to get certain data from 10 different websites, either have the macro/session ask me 10 times for a new website (or until I stop it), or allow me to just feed it a list
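For what it's worth, the placeholder expansion itself is tiny. A minimal TypeScript sketch, assuming a {{name}} tag syntax (the names expandTemplate and expandForAll are made up for illustration, not anything in BrowserBee):

```typescript
// Hypothetical sketch of "templified" sessions: a saved session prompt
// with {{placeholder}} tags is expanded once per input row, mail-merge style.
type TemplateInput = Record<string, string>;

function expandTemplate(template: string, input: TemplateInput): string {
  // Replace each {{name}} tag with the matching input value;
  // unknown tags are left in place.
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in input ? input[name] : match
  );
}

function expandForAll(template: string, inputs: TemplateInput[]): string[] {
  // One expanded task per input row, e.g. one per website.
  return inputs.map((input) => expandTemplate(template, input));
}

// Usage: feed the session a list instead of being prompted 10 times.
const tasks = expandForAll("Get the pricing table from {{site}}", [
  { site: "example.com" },
  { site: "example.org" },
]);
// tasks[0] === "Get the pricing table from example.com"
```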
Anyway, great work! Oh also, if you want to be truly privacy-first you could add support for local LLMs via Ollama
Thank you!
I like that suggestion. Saved prompts seem like an obvious addition, and having templating within them makes sense. I wonder how well "for each of the following websites do X" prompts would work (i.e. have the LLM do the enumeration rather than the client - my intuition is that it won't be as robust because of the long accumulated context)
Edit: forgot to mention it does support Ollama already
Yeah, that "for each" needs to be code instead of a prompt. Ideally you want to use the LLM only the first time you run the task; after "figuring out the path", you want to run it directly through code
So for the example above, the user might have to do: "do this for this website", then save macro, then create template, then run template with input: [list of 10 websites]
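A rough sketch of that flow, with every name (Step, Executor, runTemplate) hypothetical: the LLM is used once to record the steps, and replays are plain code with the per-run input substituted in:

```typescript
// Hypothetical sketch: use the LLM only once to discover the action path,
// then replay the recorded steps as plain code for each subsequent input.
type Step = { action: "goto" | "click" | "read"; target: string };

interface Executor {
  run(step: Step): Promise<string>;
}

async function runTemplate(
  steps: Step[],
  sites: string[],
  exec: Executor
): Promise<string[]> {
  const results: string[] = [];
  for (const site of sites) {
    // Substitute the per-run input into the recorded steps; no LLM call here.
    for (const step of steps) {
      const bound = { ...step, target: step.target.replace("{{site}}", site) };
      results.push(await exec.run(bound));
    }
  }
  return results;
}
```

The Executor would be backed by the real browser automation layer; here it is an interface so the replay logic stays independent of it.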
You might be able to reduce the amount of information sent to the LLM by 100-fold if you use a stacking context. Here is an example of one made available on GitHub (not mine). [0] Moreover, you will be able to parse the DOM, or have strategies that parse the DOM. For example, if you are only concerned with video, find all the videos and only send that information. Perhaps parse a page once, find the structure, and cache that so that the next time only the required data is used. (I see you are storing tool sequences, but I didn't find an example of storing a DOM structure so that requests to subsequent pages are optimized.)
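A sketch of the pruning idea, assuming a simplified node shape (DomNode and prune are made up, and this is not the real DOM API): keep only nodes a strategy matches, plus their ancestors, before serializing anything for the LLM:

```typescript
// Hypothetical sketch of DOM pruning before sending context to the LLM:
// keep only nodes matching a strategy (e.g. "videos only") and the
// ancestors needed to reach them, instead of serializing the whole DOM.
type DomNode = { tag: string; text?: string; children: DomNode[] };

function prune(node: DomNode, keep: (n: DomNode) => boolean): DomNode | null {
  const children = node.children
    .map((c) => prune(c, keep))
    .filter((c): c is DomNode => c !== null);
  // Keep this node if it matches, or if any descendant matched.
  if (keep(node) || children.length > 0) {
    return { ...node, children };
  }
  return null;
}

// Strategy: only video elements survive; everything else is dropped.
const videosOnly = (n: DomNode) => n.tag === "video";
```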
If someone visits my website that I control using your Chrome Extension, I will 100% be able to find a way to drain all their accounts probably in the background without them even knowing. Here are some ideas about how to mitigate that.
The problem with Playwright is that it requires the Chrome DevTools Protocol (CDP), which opens massive security problems for a browser that people use for their banking and for managing anything that involves credit cards or sensitive accounts. At one point, I took the injected folder out of Playwright and injected it into a Chrome Extension because I thought I needed its tools; however, I quickly abandoned it, as it was easy to create the workflows from scratch. You get a lot of stuff immediately by using Playwright, but you will likely find it much lighter and safer to implement that functionality yourself.
The only benefit of CDP for normal use is allowing automation of any action in the Chrome Extension that requires trusted events, e.g. playing sound, going fullscreen, or banking websites that require a trusted event to transfer money. In my opinion, people just want a large part of the workflow automated and don't mind being prompted to click a button when trusted events are required. Since it doesn't matter which button is clicked, you can inject a big button that says Continue (or whatever is required) after prompting the user. Trusted events are there for a reason.
[0] https://github.com/andreadev-it/stacking-contexts-inspector
I will look into this. The low information density of raw DOM tokens, and the speed and cost problems it causes, is the single biggest issue for this type of thing right now.
Looks awesome. Over the last couple of months, I've built a similar Chrome Extension: https://overlay.one/en
I also started with conversational mode and interactive mode, but later removed the interactive mode to keep its features a bit simpler.
That looks very cool. Would love to chat if you're open to it
happy to, sent you a message
> Since BrowserBee runs entirely within your browser (with the exception of the LLM), it can safely interact with logged-in websites, like your social media accounts or email, without compromising security or requiring backend infrastructure.
Does it send the content of the website to the LLM?
Yes, the LLM can invoke observation tools (e.g. read the text/DOM or take a screenshot) to retrieve the context it needs to take the next action
So maybe something we want to be mindful of before using this on banking, health, etc.
How is it “privacy-first” then if it literally sends all your shit to the LLM?
> How is it “privacy-first” then if it literally sends all your shit to the LLM?
Because it supports Ollama, which runs the LLM entirely locally on your own hardware, thus data sent to it never leaves your machine?
Edit: joshstrange beat me to the same conclusion by mere moments. :)
You can use Ollama as the backend so the data never leaves your computer.
Also, the line is blurry for some people on “privacy” when it comes to LLMs. I think some people, not me, think that if you are talking directly to the LLM provider API then that’s “private” whereas talking to a service that talks to the LLM is not.
And, to be fair, some people use privacy/private/etc language for products that at least have the option of being private (Ollama).
Looks amazing. Would love something like this in Firefox or Zen. Mozilla released Orbit, but it never ended up being really useful.
Firefox already has something similar natively, but it's not enabled by default. If you turn on the new sidebar they have an AI panel, which basically looks like an iframe to the Claude/OAI/Gemini/etc chat interface. Different from Orbit.
That sidebar doesn't have the ability to do any actions on the browser tab, or to use data from the browser as context in any way. It is just a simple iframe.
If you click the three-dots menu above the iframe, you can select "Show shortcut when selecting text". That allows you to select text and then provide that as context to an AI prompt.
(At least, that's how I understand it - I have the feature turned off myself.)
Thank you! :)
Would love to explore a FF port. Right now, there are a couple of tight Chrome dependencies:
- CDP - mostly abstracted away by Playwright so perhaps not a big lift
- IndexedDB for storing memories and potentially other user data - not sure if there's a FF equivalent
FF supports IndexedDB directly; it has fully supported it since version 16 [0].
[0] https://caniuse.com/indexeddb
Thanks! Will track your project for the future. Looks very promising
Looks great! Tried a few examples and models, works very well.
Very nice. I tried with Ollama and it works well.
The biggest issue is having the Ollama models hardcoded to Qwen3 and Llama 3.1. I imagine most Ollama users have their favorites, which probably vary quite a bit. My main model is usually Gemma 3 12B, which does support images.
It would be a nice feature to have a custom config on the Ollama settings page, save it to Chrome storage, and use that in the 'getAvailableModels' method along with the hardcoded models.
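A sketch of what that merge might look like. Only getAvailableModels is named in the thread; the model shape and default list are assumptions, and in the extension the custom entries would be read from chrome.storage. Here they are passed in as an argument so the merge logic stays pure and testable:

```typescript
// Hypothetical sketch: merge user-configured Ollama models (saved in
// extension storage) with the built-in defaults, deduplicated by name.
interface OllamaModel {
  name: string; // e.g. "gemma3:12b"
  supportsImages: boolean;
}

// Assumed defaults, matching the hardcoded models mentioned in the thread.
const DEFAULT_MODELS: OllamaModel[] = [
  { name: "qwen3", supportsImages: false },
  { name: "llama3.1", supportsImages: false },
];

function getAvailableModels(custom: OllamaModel[]): OllamaModel[] {
  const byName = new Map<string, OllamaModel>();
  // Defaults first, then user entries, so a custom entry can override a default.
  for (const m of [...DEFAULT_MODELS, ...custom]) {
    byName.set(m.name, m);
  }
  return [...byName.values()];
}
```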
Great suggestion, will add custom Ollama configurations to the next release
What makes it privacy first?
Shouldn't it use local llm then?
Does it send my password to a provider when it signs up to a website for me?
This looks fun, thanks for sharing. Will definitely give it a shot soon.
I read over the repo docs and was amazed at how clean and thorough it all looks. Can you share your development story for this project? How long did it take you to get here? How much did you lean on AI agents to write this?
Also, any plans for monetization? Are you taking donations? :)
Thanks a lot! :)
I might write a short post on the development process, but in short:
- started development during Easter so roughly a month so far
- developed mostly using Cline and Claude 3.7
- inspired by and borrowed heavily from Cline, Playwright MCP, and Playwright CRX, which had solved a lot of the heavy lifting already - in a sense this project is those three glued together
I don't plan to monetize it directly, but I've thought about an opt-in model for contributing useful memories to a central repository that other users might benefit from. My main aim with it is to promote open source AI tools.
Chrome Canary already has Gemini Nano built into the browser for local LLM use. For the use cases you mentioned, there is no need to call a third party.
In a way this should be a core feature of any browser and if this project accelerates/improves that by 5% I will be very happy!
The fact that Chrome and Gemini are, at least for now, owned by the same company raises huge privacy and consumer choice concerns for me though, and I see benefit in letting the user choose their model, where/how to store their data, etc.
Gemini Nano sounds like a model that only does basic autocomplete or semantic inference, with no tool calling for sure. What this kind of product seems to be headed toward is something like Manus, which needs agentic (thinking, planning, tool calling) capabilities.
Can it perform DOM manipulation as well, like filling forms, or would the LLM response need to be structured for each specific site it's used on? And would an LLM be able to perform such a task?
It can fill forms - the agent can invoke a large number of tools to both observe and interact with a page.
I keep getting "Error: Failed to stream response from [Gemini | OpenAI] API. Please try again." - tried valid new keys from both Google/OpenAI
Is that with 2.5 Flash? I got that error intermittently with that model, but the other Gemini models worked fine. I'll investigate.
Ah yeah, 2.0 Flash is working. 2.5 doesn't, and the OpenAI 4.0 and mini models don't work either. The error message should probably say to try other models, because I was pretty confused.
Looks great. Any plans for this to work in Firefox?
I'll be exploring a FF port. There are a couple of tight Chrome dependencies that need to be rethought (IndexedDB for storage and CDP for most actions)
IndexedDB is not Chrome-only
Can this be used to automatically remove the plethora of cookie banners/modals polluting the web?
Yes! Sometimes it does it even without the user asking which is very satisfying :)
Aren't browsers starting to ship with built-in LLMs? I don't know much about this but if so then surely your extension won't need to send queries to LLM APIs?
There are two types of built-in LLMs:
- The ones the user sees (like a side panel). These often use LLM APIs like OpenAI's.
- The browser API ones. These are indeed local, but are often very limited smaller models (for Chrome this is Gemini Nano). Results from these would be lower quality, and of course with large contexts, either impossible or slower than using an API.
Looks like the example video is extremely expensive. It racks up almost $2 of usage in about a minute.
Good spot. I probably shouldn't have the 2nd most expensive model in the demo!
Some of the cheaper models have very similar performance at a fraction of the cost, or indeed you could use a local model for "free".
The core issue though is that there's just more tokens to process in a web browsing task than many other tasks we commonly use LLMs for, including coding.
Interesting. I can't play with it now since I'm out on a grocery run, but can it interact with elements on the page if asked directly?
Yes, you can ask it to both observe (e.g. query an element) and interact with (e.g. click on) elements, for example using selectors or a high-level reference like the label or the color of a button.
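To illustrate the high-level-reference idea, a hedged sketch (the element shape and findByReference are made up for this comment, not BrowserBee's actual code): instead of a raw CSS selector, the agent matches on attributes like label or color:

```typescript
// Hypothetical sketch: resolve a high-level reference ("the blue button
// labeled Continue") to a concrete element, as an alternative to a selector.
type Elem = { tag: string; label?: string; color?: string };

function findByReference(
  elements: Elem[],
  ref: { label?: string; color?: string }
): Elem | undefined {
  // Each provided attribute must match; omitted attributes are wildcards.
  return elements.find(
    (e) =>
      (ref.label === undefined || e.label === ref.label) &&
      (ref.color === undefined || e.color === ref.color)
  );
}
```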
I presume that this works by processing the HTML and feeding it to the LLM. What approaches did you take for doing this? Or am I wrong?
Under the "tools" part of the README it shows the following observation tools:
- browser_snapshot_dom
- browser_query
- browser_accessible_tree
- browser_read_text
- browser_screenshot
So most likely the LLM can choose how to "see" the page?
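If so, the routing layer could be as simple as a name-to-function map. A hypothetical sketch (the tool names are from the README; the registry shape and function signatures are assumptions):

```typescript
// Hypothetical dispatch sketch: the LLM picks one of the observation tools
// by name, and the extension routes the call to the matching function.
type ObservationTool = (selector?: string) => Promise<string>;

function makeRegistry(tools: Record<string, ObservationTool>) {
  return async function invoke(name: string, selector?: string): Promise<string> {
    const tool = tools[name];
    if (!tool) throw new Error(`unknown tool: ${name}`);
    return tool(selector);
  };
}
```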
This looks really well done. I particularly like the simple user interface. A lot of the time these things are unnecessarily complex I feel.