The Parallel Search API

123 points | lukaslevert | 3 months ago | parallel.ai

52 comments

davidsainez|3 months ago

Saw this post, clicked on pricing: "Run up to 20,000 requests for free" - ok, let's try it. Signed up for an account, clicked on Playground, tried a query -> balance is insufficient. Then I clicked on the "Pricing" tab inside the dashboard (https://platform.parallel.ai/pricing) - no mention of any free requests.

I pay for a lot of tools, but patterns like this leave me with a really bad impression.

nextworddev|3 months ago

+ apps that make it hard to cancel on a self serve basis

I tried cancelling Exa (search) and they had me email support, who ignored it and required a follow-up. Then they had the nerve to ask for feedback.

kshelat|3 months ago

hi - this is unexpected. new users should receive free credits. we're looking into the bug. happy to help if you can email support@parallel.ai

hamasho|3 months ago

  > Traditional search engines were built for humans. They rank URLs, assuming someone will click through and navigate to a page. The search engine's job ends at the link. The system optimizes for keywords searches, click-through rates, and page layouts designed for browsing - done in milliseconds and as cheaply as possible.
  > ... AI search has to solve a different problem: what tokens should go in an agent's context window to help it complete the task? We’re not ranking URLs for humans to click— we’re optimizing context and tokens for models to reason over.

I also want a search engine that ranks results based on how useful they are to reason about, not how well they can sell potential ads by provoking false rage or insecurities. And it would be better if unrelated information and fancy gimmicks were stripped from the page, like Reader View does.
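The "what tokens should go in an agent's context window" framing quoted above can be made concrete with a simple packing heuristic. This is a hypothetical sketch of the general idea, not Parallel's actual algorithm; the function name and the word-count tokenizer stand-in are assumptions:

```python
# Hypothetical sketch: pack relevance-ranked excerpts into a fixed token
# budget, as an AI-oriented search backend might do before handing
# context to an agent. Token cost is approximated by whitespace words.

def pack_context(excerpts, budget):
    """excerpts: list of (score, text) pairs. Returns the highest-scoring
    texts that together fit within `budget` approximate tokens."""
    packed, used = [], 0
    for score, text in sorted(excerpts, key=lambda e: e[0], reverse=True):
        cost = len(text.split())  # crude stand-in for a real tokenizer
        if used + cost <= budget:
            packed.append(text)
            used += cost
    return packed

hits = [
    (0.9, "Parallel exposes a search API aimed at agents."),
    (0.7, "Traditional engines rank URLs for human click-through."),
    (0.2, "Unrelated boilerplate about cookie preferences."),
]
print(pack_context(hits, budget=16))
```

A real system would use the model's tokenizer and smarter excerpt selection, but the objective is the same: maximize useful signal per token rather than ranking links.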

keeganpoppen|3 months ago

it’s funny because this is literally how Google USED to work. sigh.

srameshc|3 months ago

I like Parallel and have been using it for tests, but I'm not sure about the terms.

> The materials displayed or performed or available on or through our website, including, but not limited to, text, graphics, data, articles, photos, images, illustrations and so forth (all of the foregoing, the “Content”) are protected by copyright and/or other intellectual property laws. You promise to abide by all copyright notices, trademark rules, information, and restrictions contained in any Content you access through our website, and you won’t use, copy, reproduce, modify, translate, publish, broadcast, transmit, distribute, perform, upload, display, license, sell, commercialize or otherwise exploit for any purpose any Content not owned by you, (i) without the prior consent of the owner of that Content or (ii) in a way that violates someone else’s (including Parallel's) rights.

pegasus|3 months ago

IANAL, but I think this is to remind you that the fragments of text it returns to you, after pulling them from various sites in response to your query, are protected by whatever copyright notices might be found on those websites. Seems reasonable to me.

neya|3 months ago

As an aside: because the chart legend and the data point use the exact same text and icon (just a dot), I thought at first that the accuracy was 0%, since I had scrolled halfway down. It took me a good few seconds to see the 47% at the top after scrolling back up. Please always use different markers for the legend and the actual data point.

https://ibb.co/fVb4MVLF

tcdent|3 months ago

Search accuracy matters so much in the context of an agent because when incorrect search results are delivered, the agent tends to interpret them as fact, since they come from a "credible" source. So this is very much an industry with plenty of room for improvement, and I'm excited to see how this product performs.

barapa|3 months ago

I don't really understand this. You can and should tell the llm the source of the search results.
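The parent's suggestion, labeling every excerpt with its source so the model can weigh credibility itself, is easy to sketch. The function name and result shape here are illustrative, not any particular API's:

```python
# Illustrative sketch: attribute each search excerpt to its source URL
# before placing it in an LLM prompt, so the model can treat snippets
# as claims-from-somewhere rather than ground truth.

def format_results_for_llm(results):
    """results: list of dicts with 'url' and 'excerpt' keys.
    Returns a prompt fragment with per-snippet source attribution."""
    lines = ["Web search results (verify claims against their sources):"]
    for i, r in enumerate(results, 1):
        lines.append(f"[{i}] Source: {r['url']}")
        lines.append(f"    Excerpt: {r['excerpt']}")
    return "\n".join(lines)

prompt = format_results_for_llm([
    {"url": "https://example.com/a", "excerpt": "Claim A."},
    {"url": "https://example.org/b", "excerpt": "Claim B."},
])
print(prompt)
```

Whether the model actually discounts low-quality sources is another question, but without attribution it has no chance to.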

BinaryIgor|3 months ago

Interesting, but I'm not totally convinced that searching for LLMs is different from searching for us (humans). In the end, we both want information that's relevant to our query (intent). Besides, I wonder whether they will be able to convince big players like OpenAI to use them instead of Google Search, with its proven record :)

namegulf|3 months ago

You're right, in the end the final user is human.

The major difference is how the data is structured for consumption.

bfeynman|3 months ago

the need for more web search indices is indeed dire: with agents proliferating and providers turning into walled gardens, independent indices are definitely going to be needed. But it just seems insurmountable when building an actual index is so costly. Maybe a purely Pareto-efficient approach, serving 80% of requests or so, is good enough.

nahnahno|3 months ago

The fact that GPT-4.1 was the judge does not convince me of the validity of the benchmark.

ripped_britches|3 months ago

It’s probably just that they started before GPT-5 was released. It’s a good judge.

tacoooooooo|3 months ago

it's an odd choice. I'd be curious why they picked it; it's not the cheapest, the most expensive, the best, or the worst.

It does have a relatively large context window, and IME it is very good at format adherence.

ddp26|3 months ago

Hi Parag, congrats on the launch. We'll try this out at FutureSearch.

I agree there is a need for such APIs. Using Google or Bing isn't enough, and Exa and Brave haven't clearly solved this yet.

aabhay|3 months ago

The latency of 5s for the basic-tier search request is very confusing to me. Is that 5s per request or 5s per 1k requests? If it is indeed 5s per request, that seems like a deal breaker.

pegasus|3 months ago

This is a search agent available in the cloud. The site mentions that they don't optimize for being "done in milliseconds and as cheaply as possible", and that they do a lot more work, like extracting relevant paragraphs and "Single-call resolution for complex queries that normally require multiple search hops". It's geared to be consumed by other agents, so the latency may be tolerable. They have the advantage of running the agent code close to the index, so the searches are less expensive. Basically, this is something in between a simple Google search and a "deep research" or at least "thinking" LLM call.

hartator|3 months ago

Congrats on the launch!

kanodiaayush|3 months ago

I'm really excited to try out your deep research apis, the benchmark results look really interesting and the pricing is compelling.

amnigos|3 months ago

Congrats Parag and team on the launch. I'm impressed by the quality and latency of the Parallel search APIs.

Jotalea|3 months ago

it's pretty interesting that there is a toggle to switch between "human" and "machine" styles for the website, the latter being the same site with the same information, but displayed in a markdown format.
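A human/machine toggle like that boils down to serving two renderings of the same content. This is a minimal hypothetical sketch; the parameter name and payloads are assumptions, not Parallel's implementation:

```python
# Hypothetical sketch of a "human | machine" site toggle: render the
# same page data as HTML for browsers or as markdown for agents.

PAGE = {
    "title": "The Parallel Search API",
    "body": "Search built for agents, not click-through.",
}

def render(page, style="human"):
    """Return HTML for the 'human' style, markdown for 'machine'."""
    if style == "machine":
        return f"# {page['title']}\n\n{page['body']}\n"
    return (f"<html><body><h1>{page['title']}</h1>"
            f"<p>{page['body']}</p></body></html>")

print(render(PAGE, "machine"))
```

The same idea can also be driven off the HTTP `Accept` header instead of a UI toggle, which would let crawlers and agents get the markdown view without any client-side state.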

gm678|3 months ago

Same pricing as Google's search APIs, for what it's worth.

riskable|3 months ago

I've been saying for quite some time now that AI is going to kill the traditional (free) search engine. This is just another nail in the coffin.

When an AI searches google.com for you, the ads never get shown to the user. Search engines like kagi.com are the future. You'll give the AI your Kagi API key and that'll be it. You won't even need cloud-based AI for that kind of thing! Tiny, local models trained for performing searches on behalf of the user will do it instead.

Soon your OS will regularly pull down AI model updates just like it pulls down software updates today. Every-day users will have dozens of models that are specialized for all sorts of tasks—like searching the Internet. They won't even know what they're for or what they do. Just like your average Linux user doesn't know what the `polkit` or `avahi-daemon` services do.

My hope: This will (eventually) put pressure on hardware manufacturers to include more VRAM in regular PCs/consumer GPUs.

purplecats|3 months ago

> I've been saying for quite some time now that AI is going to kill the traditional (free) search engine

if you say it for long enough, i'm sure you will be right!

stephantul|3 months ago

I fully agree, except that I think this will still be a very “power user” thing. Perhaps this is also what you mean because you reference Linux. But traditional search will be very important for a very long while, imo

lukaslevert|3 months ago

There are very broad consequences for a world that no longer accesses the web primarily through Google Search. We're building for that too!

gethly|3 months ago

> AI is going to kill the traditional (free) search engine

Yes, this has been an issue for many content creators. I predict that because of this, a lot of the internet will go behind a paywall. I run one, so I hope the future is bright, but overall this is very bad for the internet, because it was never intended to be used this way. Sure, it will be great for users to save an unimaginable amount of time searching manually, but if websites lose traffic, well... that is the end of the internet as we know it.

bakigul|3 months ago

The human and machine choice looks really good.

FridgeSeal|3 months ago

Oh look, another company choosing to use <extremely generic, non differentiating term> as their company name.

I get that everyone wants to piggyback on the common-ness of words, but it'd be a lot cooler if they _didn't_.

apsurd|3 months ago

Human | AI toggle is cool.

Obligatory: an information-dense format is valuable for humans too! But the entire Internet is propped up by ads, so it seems we can't have nice things.

NetOpWibby|3 months ago

I was pleasantly surprised by this toggle too, very neat.