top | item 46344908

drysart | 2 months ago

All of the caveats basically boil down to "if you need to access the private backing field from anywhere other than the property getter/setter, be aware it's going to have a funky, non-C#-compliant field name".

In the EF Core and Automapper type of cases, I consider it an anti-pattern that something outside the class is taking a dependency on a private member of the class in the first place, so the compiler is really doing you a favor by hiding away the private backing field more obscurely.
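A minimal sketch of the behaviour being discussed (note: the exact name `<Name>k__BackingField` is a Roslyn implementation detail, not something the C# language guarantees):

```csharp
using System;
using System.Reflection;

class Person
{
    // Auto-property: the compiler emits a hidden backing field whose
    // name contains '<' and '>' and so can never collide with, or be
    // referenced by, a legal C# identifier.
    public string Name { get; set; } = "Ada";
}

class Demo
{
    static void Main()
    {
        // Only reflection can see the backing field, and only by its
        // compiler-generated name.
        foreach (FieldInfo f in typeof(Person)
            .GetFields(BindingFlags.NonPublic | BindingFlags.Instance))
        {
            Console.WriteLine(f.Name); // prints "<Name>k__BackingField"
        }
    }
}
```

Anything (like an ORM or mapper) that hard-codes that string is coupling itself to a compiler implementation detail, which is the anti-pattern being described.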

WorldMaker | 2 months ago

> In the EF Core and Automapper type of cases, I consider it an anti-pattern that something outside the class is taking a dependency on a private member of the class in the first place, so the compiler is really doing you a favor by hiding away the private backing field more obscurely.

It's another variation of the "parse, don't validate" dance. Just because you can do model validation in property setters doesn't mean it's always the best place for it. If you are trying to bypass the setter in a DB model, then you may have data in your database that doesn't validate; you just want to "parse" it and move on.
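A sketch of that DB-model case (the `Order` entity here is hypothetical, but `HasField` and `UsePropertyAccessMode` are real EF Core configuration calls):

```csharp
using System;

// Validation lives in the setter, but EF Core can be told to read and
// write the backing field directly when materializing, so existing rows
// that wouldn't pass the setter still load — "parse", not "validate".
public class Order
{
    private decimal _total; // explicit backing field with a speakable name

    public decimal Total
    {
        get => _total;
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value));
            _total = value;
        }
    }
}

// In DbContext.OnModelCreating (shown as a comment so the sketch
// compiles without the EF Core package):
//
// modelBuilder.Entity<Order>()
//     .Property(o => o.Total)
//     .HasField("_total")
//     .UsePropertyAccessMode(PropertyAccessMode.Field);
```

Naming the field yourself, and telling EF about it explicitly, avoids any dependence on the compiler-generated unspeakable name.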

It is similar with auto-mapping scenarios, with the complication that auto-mapping was originally meant to be the validation step in some workflows and code architectures. That's personally why AutoMapper and similar libraries have had a code smell to me: the places those tools are used are "parsing boundaries" more than they should be "validation boundaries", and the coupling between validation logic and AutoMapper logic starts to feel like a big ball of spaghetti, versus a dedicated validation layer that is only concerned with validation and isn't also doing a lot of heavy lifting in copying data around.

pwdisswordfishy | 2 months ago

I'm surprised there isn't something pseudorandom thrown in for good measure – like a few digits of a hash of the source file.

Radle | 2 months ago

The trick of using characters that by definition are not allowed inside variable names, "<" and ">", should be sufficient, no?

WorldMaker | 2 months ago

To prevent easy Reflection? It would make debugging harder and make writing a debugger harder, for maybe a small gain of avoiding some user code breaking an encapsulation boundary here or there. (But those serious about using reflection to break encapsulation boundaries would likely build complex workarounds anyway.)

It is the compiler's job to guard encapsulation boundaries in most situations, but not necessarily in all of them. There are a lot of good reasons code may want to marshal/serialize raw data. There are a lot of good reasons cross-cutting is desirable (logging, debugging, meta-programming), which is part of why .NET has such rich runtime reflection tools.

kg | 2 months ago

I believe the reason for this is that it would break deterministic builds.

SideburnsOfDoom | 2 months ago

> be aware it's going to have a funky, non-C#-compliant field name

That's longstanding behaviour. Ever since features such as anonymous types and lambdas arrived, the compiler has needed to generate classes and methods for them. These of course need names, assigned by the compiler, and those names are deliberately ones that are not allowed in user code: the compiler permits itself a wider set of names, including the "<>" characters.

I have heard them referred to as "unspeakable names": it's not that they're unknown, it's that you literally can't say them in the code.

e.g. by Jon Skeet, here https://codeblog.jonskeet.uk/category/async/ from 2013.

> they’re all "unspeakable" names including angle-brackets, just like all compiler-generated names.
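For illustration, the same unspeakable-name convention shows up for lambda closures (a sketch; the exact generated name varies by compiler version):

```csharp
using System;
using System.Reflection;

class Demo
{
    // Capturing `n` forces the compiler to generate a hidden "display
    // class" to hold the closed-over variable.
    static Func<int, int> MakeAdder(int n) => x => x + n;

    static void Main()
    {
        // The generated nested type has an unspeakable name, something
        // like "<>c__DisplayClass0_0" — impossible to write in C# source.
        foreach (Type t in typeof(Demo).GetNestedTypes(BindingFlags.NonPublic))
            Console.WriteLine(t.Name);
    }
}
```

Async state machines get the same treatment (e.g. names like `<MyMethod>d__3`), which is the context of the Skeet quote above.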

materialpoint | 2 months ago

Serialization is a pretty good cause.

NetMageSCW | 2 months ago

Serialization shouldn’t be dependent on the name of the backing field.
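A sketch of that point: mainstream serializers like System.Text.Json round-trip through the public property, so the compiler-generated backing-field name never enters the picture.

```csharp
using System;
using System.Text.Json;

class Person
{
    // The backing field is named "<Name>k__BackingField" internally,
    // but the serializer only sees the public property.
    public string Name { get; set; } = "Ada";
}

class Demo
{
    static void Main()
    {
        string json = JsonSerializer.Serialize(new Person());
        Console.WriteLine(json); // prints {"Name":"Ada"}
    }
}
```

Only serializers that deliberately reach for private fields via reflection would notice the unspeakable name at all.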