Saw this post, clicked on pricing. "Run up to 20,000 requests for free" — OK, let's try it. Signed up for an account, clicked on Playground, tried a query -> balance is insufficient. Then I clicked the "Pricing" tab inside the dashboard (https://platform.parallel.ai/pricing): no mention of any free requests.
I pay for a lot of tools, but patterns like this leave me with a really bad impression.
+ apps that make it hard to cancel on a self-serve basis
I tried cancelling Exa (search) and they had me email support, which they ignored until I followed up. Then they had the nerve to ask for feedback.
> Traditional search engines were built for humans. They rank URLs, assuming someone will click through and navigate to a page. The search engine's job ends at the link. The system optimizes for keywords searches, click-through rates, and page layouts designed for browsing - done in milliseconds and as cheaply as possible.
> ... AI search has to solve a different problem: what tokens should go in an agent's context window to help it complete the task? We’re not ranking URLs for humans to click— we’re optimizing context and tokens for models to reason over.
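The contrast the quote draws can be made concrete with a toy sketch: a traditional engine returns ranked URLs and stops there, while an agent-oriented engine returns excerpts packed into a token budget. All names and fields here are invented for illustration, not Parallel's actual schema or tokenizer.

```python
# A traditional engine's job ends at the link:
serp_result = {"rank": 1, "url": "https://example.com/post", "title": "A post"}

def rough_tokens(text: str) -> int:
    """Crude token estimate: whitespace-delimited words (a stand-in
    for a real tokenizer)."""
    return len(text.split())

def fit_to_budget(excerpts: list[str], max_tokens: int) -> list[str]:
    """Greedily pack the highest-ranked excerpts into the agent's
    context budget, dropping whatever no longer fits."""
    out, used = [], 0
    for ex in excerpts:
        cost = rough_tokens(ex)
        if used + cost > max_tokens:
            break
        out.append(ex)
        used += cost
    return out

excerpts = ["alpha beta gamma", "delta epsilon", "zeta eta theta iota"]
print(fit_to_budget(excerpts, max_tokens=5))  # → ['alpha beta gamma', 'delta epsilon']
```

The point of the sketch is only that the unit of ranking changes: the output is text sized for a context window, not a list of links sized for a results page.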
I also want a search engine that ranks results by how useful they are to reason over, not by how well they can sell ads by provoking outrage or playing on insecurities. It would also be better if unrelated information and fancy gimmicks were stripped from pages, the way Reader View does.
I like Parallel and have been using it for tests, but I am not sure about the terms.
> The materials displayed or performed or available on or through our website, including, but not limited to, text, graphics, data, articles, photos, images, illustrations and so forth (all of the foregoing, the “Content”) are protected by copyright and/or other intellectual property laws. You promise to abide by all copyright notices, trademark rules, information, and restrictions contained in any Content you access through our website, and you won’t use, copy, reproduce, modify, translate, publish, broadcast, transmit, distribute, perform, upload, display, license, sell, commercialize or otherwise exploit for any purpose any Content not owned by you, (i) without the prior consent of the owner of that Content or (ii) in a way that violates someone else’s (including Parallel's) rights.
IANAL, but I think this is to remind you that the fragments of text it returns, after pulling them from various sites in response to your query, are protected by whatever copyright notices might be found on those websites. Seems reasonable to me.
As an aside: because the chart legend and the data point use the exact same text and icon (just a dot), I at first thought the accuracy was 0%. I had scrolled halfway through, and it took me a good few seconds to see the 47% at the top after scrolling back up. Please always use different marks for the legend and the actual data point.
Search accuracy is so important in the context of an agent because when the agent is delivered incorrect search results, it tends to interpret them as fact, since they come from a "credible" source. So this is very much an industry with plenty of room for improvement, and I'm excited to see how this product performs.
Interesting, but I'm not totally convinced that searching for LLMs is different from searching for us humans. In the end, we both want information that's relevant to our query (intent). Besides, I wonder whether they will be able to convince big players like OpenAI to use them instead of Google Search, with its proven record :)
The need for more web search indices is indeed dire: with agents everywhere and providers turning into walled gardens, independent indices are definitely going to be needed. But it seems insurmountable when building an actual index is so costly. Maybe being Pareto efficient, serving 80% of requests or something, is good enough.
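The "serve 80% of requests" idea amounts to a tiered lookup: answer from your own smaller, cheaper index when it has coverage, and fall back to an upstream provider for the long tail. A hypothetical sketch, not any vendor's design; the index and upstream here are stand-ins:

```python
def search(query: str, local_index: dict, upstream) -> tuple[str, list]:
    """Serve from our own index when it has hits for the query,
    otherwise fall back to an upstream provider. Returns which tier
    answered, plus the results."""
    hits = local_index.get(query)
    if hits:
        return ("local", hits)
    return ("upstream", upstream(query))

# Toy index covering only the "head" of the query distribution:
index = {"rust borrow checker": ["rust-book/ch04", "rustonomicon/ownership"]}

print(search("rust borrow checker", index, lambda q: [])[0])        # → local
print(search("obscure long-tail query", index, lambda q: ["hit"])[0])  # → upstream
```

The economics then hinge on what fraction of real traffic the local tier absorbs versus what the upstream calls cost.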
The latency of 5s for the basic-tier search request is very confusing to me. Is that 5s per request or 5s per 1k requests? If it is indeed 5s per request, that seems like a deal breaker.
This is a search agent available in the cloud. The site mentions that they don't optimize for being "done in milliseconds and as cheaply as possible", and that they do a lot more work, like extracting relevant paragraphs and "Single-call resolution for complex queries that normally require multiple search hops". It's geared to be consumed by other agents, so the latency may be tolerable. They also have the advantage of running the agent code close to the index, making searches less expensive. Basically, this is something in between a simple Google search and a "deep research" or at least "thinking" LLM call.
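The per-request vs. per-1k-requests ambiguity is easy to settle empirically: time a few sequential calls yourself. A minimal sketch with a stub in place of the real API request (swap in an actual call with whatever endpoint and auth the provider documents):

```python
import time

def measure(call, n: int = 3) -> list[float]:
    """Wall-clock each of n sequential calls, returning seconds per call."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        call()
        samples.append(time.perf_counter() - t0)
    return samples

# Stub standing in for the real search request; pretends it takes ~50 ms.
def stub_search():
    time.sleep(0.05)

samples = measure(stub_search)
print(len(samples), min(samples) >= 0.05)  # → 3 True
```

If the median sample lands around 5 seconds, the quoted figure is per request.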
It's pretty interesting that there is a toggle to switch between "human" and "machine" styles for the website, the latter being the same site with the same information, but displayed in Markdown format.
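A toggle like that is essentially content negotiation: one content model, two renderers. A toy sketch of the idea, invented for illustration and not the site's actual implementation:

```python
def render(page: dict, audience: str) -> str:
    """Render the same content for humans (HTML) or machines (Markdown).
    A toy 'human'/'machine' toggle; field names are made up."""
    if audience == "machine":
        return f"# {page['title']}\n\n{page['body']}"
    return f"<h1>{page['title']}</h1>\n<p>{page['body']}</p>"

page = {"title": "Pricing", "body": "Processor tiers and rate limits."}
print(render(page, "machine").splitlines()[0])  # → # Pricing
```

In a real deployment the branch would typically key off the `Accept` header or a query parameter rather than an explicit argument.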
I've been saying for quite some time now that AI is going to kill the traditional (free) search engine. This is just another nail in the coffin.
When an AI searches google.com for you, the ads never get shown to the user. Search engines like kagi.com are the future. You'll give the AI your Kagi API key and that'll be it. You won't even need cloud-based AI for that kind of thing! Tiny, local models trained for performing searches on behalf of the user will do it instead.
Soon your OS will regularly pull down AI model updates just like it pulls down software updates today. Everyday users will have dozens of models specialized for all sorts of tasks, like searching the Internet. They won't even know what they're for or what they do, just like your average Linux user doesn't know what the `polkit` or `avahi-daemon` services do.
My hope: This will (eventually) put pressure on hardware manufacturers to include more VRAM in regular PCs/consumer GPUs.
I fully agree, except that I think this will still be a very “power user” thing. Perhaps this is also what you mean because you reference Linux. But traditional search will be very important for a very long while, imo
> AI is going to kill the traditional (free) search engine
Yes, this has been an issue for many content creators. I predict that, because of this, a lot of the internet will end up behind a paywall. I run one, so I hope the future is bright, but overall this is very bad for the internet, because it was never intended to be used this way. Sure, it will be great for users to save an unimaginable amount of time searching manually, but if websites lose traffic, well... that is the end of the internet as we know it.
https://ibb.co/fVb4MVLF
The major difference is how the data is structured for consumption.
It does have a relatively large context window, and ime is very good at format adherence
I agree there is a need for such APIs. Using Google or Bing isn't enough, and Exa and Brave haven't clearly solved this yet.
If you say it for long enough, I'm sure you will be right!
I get that everyone wants to piggyback on the commonness of words, but it'd be a lot cooler if they _didn't_.
Obligatory: an information-dense format is valuable for humans too! But the entire Internet is propped up by ads, so it seems we can't have nice things.