Damn, I built a RAG agent over the past three and a half months for my internship. And literally everyone in my company was asking me why I wasn't using LangChain or LlamaIndex like I was a lunatic. Everyone else who built a RAG in my company used LangChain; one even went into prod.
I kept telling them that it works well if you have a standard use case, but the second you need to do something a little original you have to go through 5 layers of abstraction just to change a minute detail. Furthermore, you won't really understand every step in the process, so if any issue arises or you need to improve the process, you start back at square one.

This is honestly such a boost of confidence.
I had a similar experience when LangChain first came out. I spent a good amount of time trying to use it - including making some contributions to add functionality I needed - but ultimately dropped it. It made my head hurt.
Most LLM applications require nothing more than string handling, API calls, loops, and maybe a vector DB if you're doing RAG. You don't need several layers of abstraction and a bucketload of dependencies to manage basic string interpolation, HTTP requests, and for/while loops, especially in Python.
On the prompting side of things, aside from some basic tricks that are trivial to implement (CoT, in-context learning, whatever) prompting is very case-by-case and iterative, and being effective at it primarily relies on understanding how these models work, not cargo-culting the same prompts everyone else is using. LLM applications are not conceptually difficult to implement, but they are finicky and tough to corral, and something like LangChain only gets in the way IMO.
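As a sketch of that claim, here is roughly all the "framework" a basic RAG-style call needs, in stdlib Python. `call_llm` is a hypothetical stand-in for whatever provider SDK or raw HTTP call you actually use:

```python
def build_prompt(question: str, chunks: list[str]) -> str:
    """'RAG' minus the vector DB: plain string interpolation."""
    context = "\n\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for one HTTP POST to a provider API."""
    return f"<answer based on {len(prompt)} chars of prompt>"

def ask(question: str, chunks: list[str], retries: int = 2) -> str:
    # The whole "orchestration layer": a loop around an API call.
    for _ in range(retries):
        answer = call_llm(build_prompt(question, chunks))
        if answer:  # in practice: validate the output, re-ask on failure
            return answer
    raise RuntimeError("model kept returning empty answers")

reply = ask("What port does the server use?", ["The server listens on 8080."])
```

String handling, an API call, and a loop: swapping the stub for a real request to OpenAI or Anthropic doesn't change the shape of this code.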
Groupthink is really common among programmers, especially when they have no idea what they are talking about.
It shows you don't need a lot of experience to see the emperor has no clothes, but you do need to pay attention.
I admire what the Langchain team has been building toward even if people don’t agree with some of their design choices.
The OpenAI api and others are quite raw, and it’s hard as a developer to resist building abstractions on top of it.
Some people are comparing libraries like Langchain to ORMs in this conversation, but I think maybe the better comparison would be web frameworks. Like, yeah the web/HTML/JSON are “just text” too, but you probably don’t want to reinvent a bunch of string and header parsing libraries every time you spin up a new project.
Coming from the JS ecosystem, I imagine a lot of people would like a lighter weight library like Express that handles the boring parts but doesn’t get in the way.
Matches my experience as well. I tried LangChain about a year ago for an app with a pretty standard use case, but even going a little off the rails I had to dig through layers of abstractions where it would have been much easier to just use the original OpenAI lib. It might be beneficial if your use case involves offering many different LLM providers in your app, but if you know you won't be swapping out the LLM provider soon, it's usually better not to use such frameworks.
I ran into similar limitations for relatively simple tasks. For example I wanted access to the token usage metadata in the response. This seems like such an obvious use case. This wasn’t possible at the time, or it wasn’t well documented anyway.
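For reference, with the raw API this is one dictionary lookup. The payload below is a hand-written sample in the shape OpenAI's chat completions endpoint returns, standing in for a real call:

```python
import json

# Hand-written sample in the shape of an OpenAI /v1/chat/completions response.
raw = json.loads("""
{
  "choices": [{"message": {"role": "assistant", "content": "Hi!"}}],
  "usage": {"prompt_tokens": 12, "completion_tokens": 3, "total_tokens": 15}
}
""")

usage = raw["usage"]  # the metadata that was hard to reach through the framework
print(f"{usage['prompt_tokens']} in, {usage['completion_tokens']} out, "
      f"{usage['total_tokens']} total")
```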
I've had the same experience. I thought I was the weird one, but, my god, LangChain isn't usable beyond demos. It feels like even proper logging pushes it beyond its capabilities.
On top of that, if you use the TypeScript version, the abstractions are often... weird. They feel like verbatim ports of the Python implementations. Many things are abstracted in ways that are not very type-safe and you'd design differently with type safety in mind. Some classes feel like they only exist to provide some structure in a language without type safety (Python) and wouldn't really need to exist with structural type checking.
Could someone point me towards a good resource for learning how to build a RAG app without LangChain or LlamaIndex? It's hard to find good information.
You are heading in the right direction. It's amazing to see seasoned engineers go through the mental gymnastics of justifying installing all those dependencies and arguing about vector DB choices when the data fits in RAM and the Swiss Army knife is right there: np.array
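A sketch of what that looks like, with toy random vectors standing in for real embeddings:

```python
import numpy as np

# Toy corpus: each row stands in for the embedding of one document chunk.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(10_000, 384)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)  # pre-normalize rows

def top_k(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Cosine similarity against every chunk: one matmul, no vector DB."""
    q = query / np.linalg.norm(query)
    scores = corpus @ q                   # (10_000,) similarity scores
    return np.argsort(scores)[-k:][::-1]  # indices of the k best chunks

hits = top_k(corpus[42])  # a chunk should best match itself
```

Ten thousand 384-dim chunks is about 15 MB; brute force over that is milliseconds per query, which is plenty for most internal RAG apps.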
I built my first commercial LLM agent back in October/November last year. As a newcomer to the LLM space, every tutorial and youtube video was about using LangChain. But something about the project had that "bad code" smell about it.
I was fortunate in that the person I was building the project for was able to introduce me to a few other people more experienced with the entire nascent LLM agent field and both of them strongly steered me away from LangChain.
Avoiding going down that minefield-ridden path really helped me out early on; instead I focused on learning how to build agents "from scratch", more or less. That gave me a much better handle on how to interact with agents and has led me into learning how to run the various models independently of the API providers and get more productive results.
I've only ever played around with it and not built out an app like you have, but in my experience the second you want to go off script from what the tutorials suggest, it becomes an impossible nightmare of reading source code trying to get a basic thing to work. LangChain is _the_ definition of death by abstraction.
LangChain got its start before LLMs had robust conversational abilities and before the LLM providers had decent native developer APIs (heck, there was basically only OpenAI at that time). It was a bit DOA as a result. Even by last spring, I felt more comfortable just working with the OpenAI API than trying to learn LangChain's particular way of doing things.
Kudos to the LangChain folks for building what they built. They deserve some recognition for that. But, yes, I don’t think it’s been particularly helpful for quite some time.
I tried to use Langchain a couple times, but every time I did, I kept feeling like there was an incredible amount of abstraction and paradigms that were completely unnecessary for what I was doing.
I ended up calling the model myself and extracting things using a flexible JSON parser; in the end I did what I needed with about 80 lines of code.
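The core of that approach fits in a dozen lines of stdlib Python (the sample response here is made up):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first JSON object out of a chatty model response."""
    # Models love wrapping JSON in prose or ```json fences; grab the braces.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in response")
    return json.loads(match.group(0))

reply = ('Sure! Here is the data you asked for:\n'
         '```json\n{"name": "Ada", "score": 7}\n```\nAnything else?')
data = extract_json(reply)  # {'name': 'Ada', 'score': 7}
```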
This is their game. Infiltrate HN, X, YouTube, Google with "tutorials" and "case studies". Basically re-target engineers until they've seen your name again and again. Then, they sell.

Langchain, Pinecone, it's all the same playbook.
Hi HN, Harrison (CEO/co-founder of LangChain) here, wanted to chime in briefly
I appreciate Fabian and the Octomind team sharing their experience in a level-headed and precise way. I don't think this is trying to be click-baity at all which I appreciate. I want to share a bit about how we are thinking about things because I think it aligns with some of the points here (although this may be worth a longer post)
> But frameworks are typically designed for enforcing structure based on well-established patterns of usage - something LLM-powered applications don’t yet have.
I think this is the key point. I agree with their sentiment that frameworks are useful when there are clear patterns. I also agree that it is super early on and a super fast-moving field.
The initial version of LangChain was pretty high level and absolutely abstracted away too much. We're moving more and more to low level abstractions, while also trying to figure out what some of these high level patterns are.
For moving to lower level abstractions - we're investing a lot in LangGraph (and hearing very good feedback). It's a very low-level, controllable framework for building agentic applications. All nodes/edges are just Python functions, you can use with/without LangChain. It's intended to replace the LangChain AgentExecutor (which as they noted was opaque)
I think there are a few patterns that are emerging, and we're trying to invest heavily there. Generating structured output and tool calling are two of those, and we're trying to standardize our interfaces there
Again, this is probably a longer discussion but I just wanted to share some of the directions we're taking to address some of the valid criticisms here. Happy to answer any questions!
A bigger problem might be using agents in the first place.
We did some testing with agents for content generation (e.g. "authoring" agent, "researcher" agent, "editor" agent) and found that it was easier to just write it as 3 sequential prompts with an explicit control loop.
It's easier to debug, monitor, and control the output flow this way.
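That pattern is small enough to show whole; `call_llm` below is a stub standing in for the real provider call:

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real provider call."""
    return f"[output for: {prompt.splitlines()[0]}]"

def write_article(topic: str) -> str:
    # Three sequential prompts with an explicit control loop. Every
    # intermediate result is a plain string you can log, inspect, or patch.
    research = call_llm(f"Research key facts about: {topic}")
    draft = call_llm(f"Write an article from these notes:\n{research}")
    edited = call_llm(f"Edit this draft for clarity:\n{draft}")
    return edited

article = write_article("LLM frameworks")
```

No agent framework, no hidden message-passing: the "researcher", "author", and "editor" are just three prompts in sequence.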
But we still use Semantic Kernel[0] because the lowest-level abstractions it provides are still very useful in reducing the code that we have to roll ourselves, and they also make some parts of the API very flexible. These are things we'd end up writing ourselves anyway, so why not just use the framework primitives instead?

[0] https://github.com/microsoft/semantic-kernel
This sentiment is echoed in a Reddit comment as well: https://www.reddit.com/r/LocalLLaMA/comments/1d4p1t6/comment....

Similarly to this post, I think that the "good" abstractions handle application logic (telemetry, state management, common complexity), and the "bad" abstractions abstract away tasks that you really need insight into.
This has been a big part of our philosophy on Burr (https://github.com/dagworks-inc/burr), and basically everything we build -- we never want to tell people how they should interact with LLMs, rather solve the common problems. Still learning about what makes a good/bad abstraction in this space -- people really quickly reach for something like LangChain, then get sick of abstractions right after that and build their own stuff.
Langchain was released in October 2022. ChatGPT was released in November 2022.
Langchain came before chat models were invented. It let us turn those one-shot APIs into Markov chains. ChatGPT came along and made us realize we didn't want Markov chains; a conversational structure worked just as well.
After ChatGPT and GPT 3.5, there were no more non-chat models in the LLM world. Chat models worked great for everything, including what we used instruct & completion models for. Langchain doing chat models is just completely redundant with its original purpose.
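The shift is visible in the request shapes themselves: completion-era APIs took one flat string, while chat APIs make the conversation the data structure, which covers the chaining use case on its own (payloads below follow OpenAI's documented formats):

```python
# Completion-era request: one flat string with continuation semantics.
completion_request = {
    "model": "text-davinci-003",
    "prompt": "Q: What is 2+2?\nA:",
}

# Chat-era request: the conversation itself is the data structure, so
# "chaining" turns into just appending messages to this list.
chat_request = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a terse math tutor."},
        {"role": "user", "content": "What is 2+2?"},
    ],
}
chat_request["messages"].append({"role": "assistant", "content": "4"})
```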
LLM frameworks like LangChain are causing a Java-fication of Python.
Do you want a banana? You should first create the universe and the jungle and use dependency injection to provide every tree one at a time, then create the monkey that will grab and eat the banana.
I'd just like to point out that the source of the gorilla/banana problem is Joe Armstrong. He really had an amazing way of explaining complex problems in a simple way.

https://www.johndcook.com/blog/2011/07/19/you-wanted-banana/
Holy moly, this was _exactly_ my impression. It seems to really be proliferating and it drives me nuts. It makes it almost impossible to do useful things, which never used to be a problem with Python, even in the case of complex projects.
Figuring out how to customize something in a project like LangChain is positively Byzantine.
Langchain was my first real contact with Python development, and it felt worse than Enterprise Java. I didn't know that OOP is so prominent in Python libraries, it looks like many devs are just copying the mistakes from Enterprise Java/.NET projects.
This echoes our experience with LangChain, although we abandoned it before putting it into production. We found that for simple use cases it's too complex (as mentioned in the blog), and for complex use cases it's too difficult to adapt. We were not able to identify the sweet spot where using it would be worth it. We felt we could easily code most of its functionality ourselves, quickly and in a way that fits our requirements.
I've never seen an HN thread where everybody just unanimously agrees, and wow, I definitely will not be recommending LangChain or using it personally after reading through all these horror stories.
Seems like another case of creating busysoftware: it doesn't add value, rather takes value away through needless pedantry, but has enough GitHub stars for people to take a look anyway.
I think LangChain basically tried to do a land grab: insert itself between developers and LLMs.
But it didn't add significant value and seemed to dress it up by adding abstractions that didn't really make sense.
It was that abstraction gobbledygook smell that made me cautious.
Langchain reminds me of GraphQL. A technology that a lot of people seem to hype about, that sounds like something you should use because all the cool kids use it, but at the end of the day it just makes things unnecessarily complicated.
GraphQL actually holds value in my view, as it gives custom SQL-like functionality instead of basic JSON APIs. With it, you can make fewer calls and retrieve only the attributes you need. Granted, if SQL were directly exposed as an API, then GraphQL wouldn't hold much value.

Langchain has no such benefit.
I don't know a thing about LangChain so this is a real digression, but I often wonder if people who are critiquing GraphQL do so from the position of only having written GraphQL resolvers by hand.
If so, it would make sense. Because that's not a whole lot of fun. But a GraphQL server-side that is based around the GraphQL Schema Language is another matter entirely.
I've written several applications that started out as proofs of concept and have evolved into production platforms based on this pairing:

https://lighthouse-php.com https://lighthouse-php-auth.com
It is staggeringly productive, replaces lots of code generation in model queries and authentication, interacts pretty cleanly with ORM objects, and because it's part of the Laravel request cycle is still amenable to various techniques to e.g. whitelist, rate-limit or complexity-limit queries on production machines.
I have written resolvers (for non-database types) and I don't personally use the automatic mutations; it's better to write those by hand (and no different, really, to writing a POST handler).
The rest is an enormous amount of code-not-written, described in a set of files that look much like documentation and can be commented as such.
One might well not want to use it on heavily-used sites, but for intranet-type knowledgebase/admin interfaces that are an evolving proposition, it's super-valuable, particularly paired with something like Nuxt. Also pretty useful for wiring up federated websites, and it presents an extremely rapid way to develop an interface that can be used for pretty arbitrary static content generation.
GraphQL is very powerful when combined with Relay. It’s useless extra bloat if you just use it like REST.
The difference between the two technologies is that LangChain was developed and funded before anyone knew what to do with LLMs, while GraphQL was internal tooling used to solve a real problem at Meta.
In a lot of ways, LangChain is a poor abstraction because the layer it's abstracting was (and still is) in its infancy.
Evaluating technology based on its "cool kid usage" and a vague sense of complexity is likely not the best strategy. Perhaps instead you could ask "what problems does this solve/create?"
I had the same impression after working through the LangChain tutorials.
The one thing I'd like to ask about is Observability. LangChain has some tools around observability that seem genuinely useful to me, and specific to working with LLMs. Are there ways to use only these tools, or alternative observability tools you recommend for working with LLMs?
Sorry, noob question: where can I read more about this "agents" paradigm? Is one agent's output directly calling/invoking another agent? Or is there a fixed graph of information flow, with each agent having prompt presets/templates ("you are an expert in this, only respond in that") of sorts?
Also, how much success have people had with automating the E2E tests for their various apps by stringing such agents together themselves?
My reading of the article is that because LangChain is abstracted poorly, frameworks should not be used, but that seems a bit far.
My experience is that Python has a frustrating developer experience for production services. So I would prefer a framework with better abstractions and a solid production language (performance and safety) over no framework and Python, if those were the options.
I recently unwrapped linktransformer to get access to some intermediate calculations and realized it was a pretty thin wrapper around SentenceTransformer and DBSCAN. It would have taken me so much longer to get similar results without copying their defaults and IO flow. It's easy to take for granted code you didn't have to develop from scratch. It would be interesting if there were a tool that inlined dependency calls and shook out unvisited branches automatically.
It would have been great if the article provided a more realistic example.
The example they use is indeed more complex than the openai equivalent, but LangChain allows you to use several models from several providers.
Also, it's true that the override of the pipe character is unexpected. But it should make sense if you're familiar with Linux/Unix, and I find it shows more clearly that you are constructing a pipeline:
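(The example appears to have been cut off. For illustration, here is a self-contained toy version of the pattern, not LangChain's actual classes, showing how overriding `|` yields a Unix-style pipeline:)

```python
class Runnable:
    """Toy version of the pattern: `|` composes steps left to right."""
    def __init__(self, fn):
        self.fn = fn
    def __or__(self, other):
        # prompt | model means: run prompt first, feed its output to model.
        return Runnable(lambda x: other.fn(self.fn(x)))
    def invoke(self, x):
        return self.fn(x)

prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
model = Runnable(lambda p: f"<llm reply to: {p}>")  # stand-in for a real model
parser = Runnable(lambda s: s.strip("<>"))

chain = prompt | model | parser  # reads like a Unix pipeline
result = chain.invoke("bears")
```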
Genuine question: can someone point me to a use case where langchain makes the problem easier to solve than using the openai/anthropic/ollama SDKs directly? I've gotten a lot of advice to use langchain, but the docs haven't really shown me how it simplifies the task, or at least not more than using an SDK directly.
I really want to at least understand when to use this as a tool but so far I've been failing to figure it out. Some of the things that I tried applying it for:
- Doing a kind of function calling (or at least, implementing the schema validation) for non-gpt models
- Parsing out code snippets from responses (and ignoring the rest of the output)
- Having the output of a prompt return as a simple enum without hallucinations
- Processing a piece of information in multiple steps, like a decision tree, to create structured output about some text (is this a directory listing or a document with content? What category is it? Is it NSFW? What is the reason for it being NSFW?)

Any resources are appreciated.
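For the enum case specifically, one common framework-free pattern is validate-and-re-ask; `call_llm` here is a hypothetical stub standing in for a real provider call:

```python
ALLOWED = {"document", "directory_listing", "nsfw"}

def call_llm(prompt: str) -> str:
    """Stub standing in for a real provider call; imagine a noisy reply."""
    return "I think this is a: document"

def classify(text: str, retries: int = 3) -> str:
    prompt = f"Classify the text as one of {sorted(ALLOWED)}. Text: {text}"
    for _ in range(retries):
        reply = call_llm(prompt).lower()
        # Accept the answer only if exactly one allowed label appears.
        found = [label for label in ALLOWED if label in reply]
        if len(found) == 1:
            return found[0]
        prompt += "\nAnswer with exactly one label, nothing else."
    raise ValueError("model never produced a valid label")

label = classify("Annual report, 42 pages of text.")
```

The validation loop, not the prompt, is what eliminates hallucinated labels: anything outside the enum is simply rejected and re-asked.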
I am always suspicious of frameworks, for two reasons. First, because of the inversion of control, they are more rigid than libraries. This is quite fundamental, though there are cases where the trade-off is totally worth it. The second is how they are created: it often starts with an application which is then gradually made generic. This is good for advertising, since you can always show how useful the framework is with an application that uses it. But this "making it generic" is a very tricky process that often fails. It is top-down: the authors need to imagine possible uses and then enable them in the framework, while with libraries the users have much more freedom to discover uses in a bottom-up process. Users always have surprising ideas.
There are now libraries that cover some of the features of Langchain: Instructor and my own LLMEasyTools for function calling, and LiteLLM for API unification.
Yup. The problem with frameworks is they assume (historically mostly but not always correctly) that layers of abstraction mean one can forget about the layers below. This just doesn't work with LLMs. The systems are closer to biology or something.
IMO LangChain provides very high level abstractions that are very useful for prototyping. It allows you to abstract away components while you dig deeper on some parts that will deliver actual value.
But aside from that, I don't think I would run it in production. If something breaks, I feel like we would be in a world of pain to get things back up and running. I am glad they shared their experience on that, this is an interesting data point.