Show HN: Hyperbrowser MCP Server – Connect AI agents to the web through browsers
63 points | shrisukhani | 11 months ago | github.com
Our MCP server exposes seven tools for data collection and browsing:
1. `scrape_webpage` - Extract formatted content (markdown, screenshot, etc.) from any webpage
2. `crawl_webpages` - Navigate through multiple linked pages and extract LLM-friendly formatted content
3. `extract_structured_data` - Convert messy HTML into structured JSON
4. `search_with_bing` - Query the web and get results with Bing search
5. `browser_use_agent` - Fast, lightweight browser automation with the Browser Use agent
6. `openai_computer_use_agent` - General-purpose automation using OpenAI’s CUA model
7. `claude_computer_use_agent` - Complex browser tasks using Claude computer use
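For reference, an MCP client invokes one of these tools with a standard `tools/call` JSON-RPC request. The shape below follows the MCP spec; the argument name (`url`) is illustrative, so check the server's tool schemas for the exact fields:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "scrape_webpage",
    "arguments": { "url": "https://example.com" }
  }
}
```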
You can connect the server to Cursor, Windsurf, Claude desktop, and any other MCP clients with this command `npx -y hyperbrowser-mcp` and a Hyperbrowser API key. We're running this on our cloud browser infrastructure that we've been developing for the past few months – it handles captchas, proxies, and stealth browsing automatically.
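For Claude Desktop, wiring this up typically means adding an entry to the `mcpServers` section of the client config. A minimal sketch (the `HYPERBROWSER_API_KEY` env var name is our assumption here; check the repo README for the exact variable):

```json
{
  "mcpServers": {
    "hyperbrowser": {
      "command": "npx",
      "args": ["-y", "hyperbrowser-mcp"],
      "env": {
        "HYPERBROWSER_API_KEY": "your-api-key"
      }
    }
  }
}
```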
Some fun things you can do with it: (1) deep research with Claude Desktop, (2) summarizing the latest HN posts, (3) creating full applications from short gists in Cursor, (4) automating code review in Cursor, (5) generating llms.txt for any website with Windsurf, (6) ordering sushi from Windsurf (admittedly, this is just for fun - probably not actually going to do this myself).
We're building this server in the open and would love feedback from anyone building agents or working with web automation. If you find bugs or have feature requests, please let us know! One big issue with MCPs in general is that the installation UX sucks and auth credentials have to be hardcoded. We don't have a solution for this right now, but Anthropic seems to be working on something here, so we're excited for that to come out. We'd love to hear any other complaints or thoughts you have about the server itself, Hyperbrowser, or the installation experience.
You can check us out at https://hyperbrowser.ai or check out the source code at https://github.com/hyperbrowserai/mcp
olivia-l|11 months ago
I pointed their scraper at a URL on my server to test its behavior. It made four separate requests to the same page: three with the UA "undici"[2], and one with the UA "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36" and a different IP. The first three requests were all made within the span of 1 second, and the fourth 27 seconds later.
I emailed their published support address asking for an IP range and UA. They gave me the entire IP range of google cloud, and ignored the UA question.
This goes well beyond the "it's up to our users to implement responsible scraping practices" implication from the developer's other comment[3]. Instead, their service behaves maliciously by default, and they have implemented and documented switches that users can toggle for additional malicious scraping behavior. As far as I can tell, it is not even possible to implement a robots.txt-respecting scraper on top of this, because I couldn't find any mechanism for users to set a specific UA string.
[1]: https://docs.hyperbrowser.ai/sessions/advanced-privacy-and-a... (archived: https://web.archive.org/web/20250322045952/https://docs.hype..., http://archive.today/2025.03.22-050029/https://docs.hyperbro...)
[2]: https://github.com/nodejs/undici
[3]: https://news.ycombinator.com/item?id=43442116
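For context on the robots.txt point above: a scraper that honors robots.txt needs a stable, declared UA string to match against the file's `User-agent` rules. A minimal sketch of that check using Python's stdlib parser (the bot name and rules here are hypothetical):

```python
from urllib import robotparser

# Hypothetical robots.txt that blocks our bot from /private/
ROBOTS_TXT = """\
User-agent: my-scraper-bot
Disallow: /private/

User-agent: *
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

def can_fetch(agent: str, url: str) -> bool:
    """Return True if robots.txt permits `agent` to fetch `url`."""
    return rp.can_fetch(agent, url)

print(can_fetch("my-scraper-bot", "https://example.com/private/data"))  # False
print(can_fetch("my-scraper-bot", "https://example.com/index.html"))    # True
```

Without a way to set (and publish) a fixed UA, site operators can't write rules that target the bot, which is the crux of the complaint.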
TheTaytay|11 months ago
1) I looked at the pricing. Is search included in the price (i.e., you just pay credits/browser time)?
2) Can you tell me more about the source of your residential proxies? I am new to this space, so don’t know how people source these legitimately.
Thanks!
shrisukhani|11 months ago
1) Yep, you just pay for browser time and proxy usage
2) We use a handful of proxy providers under the hood. There are a lot of shady ones, but we only work with providers whose sources we've vetted. Different providers source proxies in different ways - directly from ISPs, paying end sources for proxies, etc.
pizzafeelsright|11 months ago
MCPs are showing promise.
shrisukhani|11 months ago
And yeah, MCP is super promising. We announced this on X and LinkedIn yesterday and the response has been really good - a lot of people with a bunch of use cases.
One surprising thing is there's also a bunch of semi/non-technical people using our MCP server, and the installation experience for them right now just absolutely sucks.
I think once auth and 1-click install are solved, MCP could become the standard way to integrate tools with LLMs