davidfowl's comments

davidfowl | 1 month ago | on: Tally – A tool to help agents classify your bank transactions

Author here. I wrote up the background and motivation in more detail here: https://medium.com/@davidfowl/tally-52f4b257b32a

The short version: Tally is not doing LLM-based classification at runtime. It’s a local, deterministic rule engine. Rules live in files, run offline, and are fully inspectable and hopefully explainable.

LLMs are optional and only used to help author and refine rules, because the hard part isn’t applying regex — it’s maintaining and evolving rule sets as new messy merchant strings show up. Once rules exist, there are zero model calls.
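To make the rule-engine idea concrete, here's a minimal sketch of deterministic, regex-based classification in Python. The rule format, merchant strings, and category names are all hypothetical illustrations; Tally's actual rule schema lives in its own files and may look nothing like this.

```python
import re

# Illustrative rules only -- Tally's real rule format may differ.
# First matching rule wins; evaluation is deterministic and offline.
RULES = [
    (re.compile(r"AMZN\s+MKTP", re.IGNORECASE), "Shopping"),
    (re.compile(r"STARBUCKS", re.IGNORECASE), "Coffee"),
    (re.compile(r"UBER\s+TRIP", re.IGNORECASE), "Transport"),
]

def classify(description: str) -> str:
    """Classify a raw transaction description with zero model calls."""
    for pattern, category in RULES:
        if pattern.search(description):
            return category
    return "Uncategorized"

print(classify("AMZN MKTP US*1X2YZ"))       # Shopping
print(classify("UBER TRIP HELP.UBER.COM"))  # Transport
```

The point of the sketch: once rules like these exist in a file, applying them is trivial; the hard part an LLM can help with is authoring and evolving the rule list as new messy merchant strings show up.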

This grew out of me initially using coding agents to generate one-off scripts for my own CSVs. That worked, but all the logic lived in prompts. The pivot was realizing the rules are the real artifact worth keeping and sharing.

If you want to hand-write rules, run offline, or use local models, that all works. Docs with the concrete workflow are here: https://tallyai.money/guide.html

Happy to answer concrete questions.

davidfowl | 1 year ago | on: General Availability of .NET Aspire: Simplifying .NET Cloud-Native Development

This is great!

Aspire has a code-based application model that is used to represent your application (or a subset of your application) and its dependencies. This can be made up of containers, executables, cloud resources and you can even build your own custom resources.

During local development, we submit this object model to the local orchestrator and launch the dashboard. The orchestrator is optimized for development scenarios and integrates with debuggers from various IDEs (e.g. VS, VS Code, Rider, etc.; it's an open protocol).

For deployment, we can use this application model to produce a manifest, which is basically a serialized version of the app model with references. Other tools can use this manifest to translate these Aspire-native assets into deployment-environment-specific assets. See https://learn.microsoft.com/en-us/dotnet/aspire/deployment/m...

This is how we support Kubernetes, Azure, eventually AWS, etc. Tools translate this model into their native lingua franca.
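As a rough illustration of the "tools translate the manifest" idea, here is a Python sketch: a toy manifest with two resources and a translator that emits placeholder deployment assets. The resource shapes are simplified stand-ins, not the actual Aspire manifest schema (the docs link above has the real format).

```python
# Toy manifest: the resource type names loosely echo Aspire's
# "container.v0" / "project.v0" convention, but the fields here
# are simplified stand-ins for illustration.
manifest = {
    "resources": {
        "cache": {"type": "container.v0", "image": "redis:7"},
        "api": {"type": "project.v0", "path": "Api/Api.csproj"},
    }
}

def to_deployment_assets(manifest: dict) -> list[str]:
    """Walk the serialized app model and emit target-specific assets,
    the way a Kubernetes- or Azure-aware tool would."""
    assets = []
    for name, resource in manifest["resources"].items():
        if resource["type"] == "container.v0":
            assets.append(f"deployment:{name} image={resource['image']}")
        elif resource["type"] == "project.v0":
            assets.append(f"deployment:{name} build={resource['path']}")
    return assets

for asset in to_deployment_assets(manifest):
    print(asset)
```

A real translator would emit Kubernetes YAML, Bicep, etc., but the shape is the same: the manifest is the stable interchange point between the app model and each deployment target.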

Longer term, we will also expose an in-process model for transforming and emitting whatever manifest format you like.

davidfowl | 1 year ago | on: Asynchronous Programming in C#

If I made mistakes feel free to file an issue or even send me a PR. It's open source! That said, you're right that I don't cover how best to call sync code from async code, or async code from sync code (the latter is more difficult, as there's no good way to do it well).
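The post is about C#, but the sync-over-async hazard generalizes, and can be sketched in Python's asyncio: blocking on async work is fine at the top level of a program, and goes wrong inside an already-running event loop, which is why there's no one-size-fits-all answer.

```python
import asyncio

async def fetch() -> str:
    await asyncio.sleep(0)  # stand-in for real async I/O
    return "data"

def sync_caller() -> str:
    # Sync-over-async works only when no event loop is already
    # running on this thread.
    return asyncio.run(fetch())

async def async_caller() -> str:
    # Inside a running loop, blocking the same way fails:
    # asyncio.run() raises RuntimeError here. Hence the usual
    # advice to go "async all the way".
    try:
        return asyncio.run(fetch())
    except RuntimeError:
        return "can't block a running loop"

print(sync_caller())                # data
print(asyncio.run(async_caller()))  # can't block a running loop
```

The C# analogue is `Task.Wait()`/`.Result` from sync code, which can deadlock depending on the synchronization context, rather than failing fast like this.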

davidfowl | 3 years ago | on: Performance Improvements in .NET 7

Thanks for telling me :). On a serious note though, anything can work with source generators but it doesn't match the style of coding that we'd like (moving everything to be declarative isn't the path we want to go down for certain APIs). Also source generators don't compose, so any source generation that we would want to use would need to take advantage of the JSON source generator (if we wanted to keep things NativeAOT safe). Right now most of the APIs you use in ASP.NET Core are imperative and source generators cannot change the callsite so you need to resort to method declarations and attributes everywhere.

That's not an optimization, that's a programming model change.

davidfowl | 3 years ago | on: Performance Improvements in .NET 7

We've been experimenting with NativeAOT for years with ASP.NET Core (which does runtime code generation all over the place). The most promising prototypes thus far are:

- https://github.com/davidfowl/FasterActions
- https://github.com/davidfowl/uController

They are source-generated versions of what ASP.NET Core does today (in both MVC and minimal APIs). There are some ergonomic challenges with source generators that we'll likely be working through over the coming years, so don't expect magic. Also, it's highly unlikely that ASP.NET Core will ever be free of reflection entirely. Luckily, "statically described" reflection generally works fine with NativeAOT.

Things like configuration binding, DI, Logging, MVC, JSON serialization all rely on some form of reflection today and it will be non-trivial to remove all of it but we can get pretty far with NativeAOT if we accept some of the constraints.

As of right now, we're trying to make sure "motivated people" can play with it, but it's not something that is supported by ASP.NET Core or EF at the moment. https://github.com/dotnet/aspnetcore/pulls?q=is%3Apr+nativea...

PS: Some of the challenges https://github.com/dotnet/aspnetcore/issues/42221

davidfowl | 4 years ago | on: .NET Myths Dispelled

JamesNK also answered this. We started off with just NuGet packages. It was beautiful for about 5 minutes, until we ended up with ~300 in the default project. Then physics kicked in: slower build times, slower compilation, slower IntelliSense. All of those old O(N)/O(N^2) algorithms started to show up in profiles and we had to do something about it.

That was just the practical performance side of things. Then there was the customer confusion around which packages had which APIs. We offered a .NET buffet that customers hated. On top of that, the versioning got nuts: each of those packages could in theory version independently, and who is going to test all of those combinations? What happens when you need to publish ~300+ packages with your server deployment because you didn't want to "install the framework"? You'd be complaining that there were too many assemblies (which people did). Amplify that by deploying those binaries to the same physical machine when running multiple .NET Core applications (very popular for IIS setups). We also pre-JIT (ready to run) the core libraries and ASP.NET to improve startup time, which makes the assemblies bigger (as they contain both native code and IL), and that makes your applications bigger by default.

We got LOTS of feedback that this was all really terrible and we listened.

We did this from .NET Core's inception to .NET Core 3.0 when we pulled the plug. We set things up so that the base install/platform/framework was not composed of packages but framework references. We merged several assemblies together to get rid of some of the unnecessary layering. We invented shared frameworks so that people could install the framework once and run lots of applications using shared libraries so that:

- Customers get faster publish times, as you only need to deploy your application bits; the framework can be pre-installed.
- Loading the same DLL on disk into multiple processes allows for more virtual memory sharing (a handy performance optimization).
- We could version the set (.NET, ASP.NET Core) as a coherent unit.
- We could pre-JIT (R2R) the built-in stuff so it's installed on the machine once and usable by many apps.

As for being intuitive, the default experience is to use the Web SDK. I didn't even get into SDKs but it does more than default the framework reference. It also exposes capabilities that tooling use to light up behaviors in the build and in the IDE.

PS: This stuff is harder than it looks on the surface and we spend lots of time and take lots of care designing it (making the typical tradeoffs you make when doing software engineering).

davidfowl | 4 years ago | on: Microsoft YARP

One thing I'd like to add as a potential differentiator is that YARP runs very well on Windows and, because it's built on ASP.NET Core, can run inside IIS and directly on HTTP.sys as well (which means we can take advantage of cool features like HTTP.sys request delegation where possible: https://github.com/microsoft/reverse-proxy/commit/b9c13dbde9...). This means you get platform portability AND deep platform integration for free.

davidfowl | 4 years ago | on: Microsoft YARP

It supports WebSockets, HTTP/2 (including gRPC) and HTTP/3 (with .NET 6+).

davidfowl | 4 years ago | on: Microsoft YARP

Extensibility via C# for custom rules. It's not as attractive if you have nothing to customize, but we've found lots of developers want to write code to influence the proxying. If you're a .NET developer, or you don't like writing Lua (what nginx uses), then you can use YARP.