jelambs | 1 year ago
I was an early engineer at Plaid, and I think there's an interesting parallel: financial data aggregators used to rely mostly on a screen-scraping model of integration, but over the past 5+ years it has moved almost fully to OAuth integrations. I'd expect the adoption curve here to be much steeper than that; banks are notoriously slow, so I'd expect tech companies to move even more quickly towards OAuth and APIs for agents.
Another dimension of this is that it's quite easy to block AI agents from screen scraping: we're able to identify OpenAI's Operator, Anthropic's Computer Use API, Browserbase, etc. with almost 100% accuracy. So some sites might choose to block agents from screen scraping and require the API path.
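The crudest version of that kind of blocking is just matching request fingerprints against known agent signatures. A minimal sketch (the signature strings and function names below are illustrative assumptions, not real product user-agent values):

```python
# Hypothetical sketch of server-side agent blocking via User-Agent matching.
# The signature strings are illustrative, not actual UA values shipped by
# Operator, Computer Use, or Browserbase.
AGENT_SIGNATURES = ["Operator", "Claude-Computer-Use", "Browserbase"]

def is_ai_agent(user_agent: str) -> bool:
    """Return True if the User-Agent contains a known agent signature."""
    ua = user_agent.lower()
    return any(sig.lower() in ua for sig in AGENT_SIGNATURES)

# A site could return 403 for flagged requests and point callers at its API.
print(is_ai_agent("Mozilla/5.0 (Browserbase/1.0)"))   # True
print(is_ai_agent("Mozilla/5.0 (Windows NT 10.0)"))   # False
```

Real detection presumably leans on much richer signals (TLS fingerprints, behavior, IP ranges) than a UA substring, but the enforcement shape is the same.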
All of this is still early too, so I'm excited to see how things develop!
bboygravity | 1 year ago
I've tried making a Firefox extension that fills web forms using an LLM, and the things website makers come up with that break their own forms for both humans and agents are just insane.
There are probably over a thousand different ways to ask for someone's address that an agent (and/or human) would struggle to understand, just to name one example.
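To make the address problem concrete: before (or instead of) asking an LLM, a form filler might try to map the wildly varied field labels onto one canonical schema. A hedged sketch, where the synonym lists and function name are my own illustrative assumptions:

```python
from typing import Optional

# Illustrative synonym table: a tiny fraction of the real-world variation
# in how forms label address fields.
SYNONYMS = {
    "street": ["street", "address line 1", "address1", "addr"],
    "city": ["city", "town", "locality"],
    "postal_code": ["zip", "postcode", "postal code", "plz"],
}

def canonical_field(label: str) -> Optional[str]:
    """Map a raw form label to a canonical address field, if any synonym matches."""
    lowered = label.lower()
    for field, keys in SYNONYMS.items():
        if any(key in lowered for key in keys):
            return field
    return None

print(canonical_field("ZIP / Postal Code"))  # "postal_code"
print(canonical_field("Town"))               # "city"
```

The long tail that doesn't match any heuristic is exactly where the LLM earns its keep.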
I think agents will be able to get through them easily, but NOT because website makers are going to do a better job of being easier to use.
danielbln | 1 year ago
sethhochberg | 1 year ago
Tools in this space rely a lot on human use of a computer being much slower, less precise, and more variable than machine use of a computer.
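One common version of that signal is inter-event timing: human keystrokes and clicks arrive with lots of jitter, while scripted input tends to be near-uniform. A minimal sketch, with a threshold I picked purely for illustration (no real product documents its values):

```python
import statistics

def looks_machine_like(event_times_ms: list, threshold_ms: float = 15.0) -> bool:
    """Flag input whose gaps between events are suspiciously uniform.

    Computes the standard deviation of inter-event gaps; very low variance
    suggests scripted rather than human input. Threshold is an assumption.
    """
    gaps = [b - a for a, b in zip(event_times_ms, event_times_ms[1:])]
    return statistics.stdev(gaps) < threshold_ms

print(looks_machine_like([0, 50, 100, 150, 200]))   # True: perfectly even gaps
print(looks_machine_like([0, 80, 310, 390, 700]))   # False: human-ish jitter
```

Agents that replay recorded human timing profiles defeat this particular check, which is why detection in practice stacks many independent signals.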