logophobia's comments

logophobia | 1 year ago | on: Traits are a local maximum

What is wrong with the following solution?

For any trait implementation where neither the trait nor the type is local:

* It is private and cannot be exported from the crate

* A local implementation always overrides any external implementation

That would solve part of the problem, right? The only thing it rules out is crates whose purpose is to offer trait implementations for external traits/types, but that might be a good thing.
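For context, this is Rust's orphan rule: the compiler rejects an impl when both the trait and the type come from other crates, and the usual workaround today is a local newtype wrapper. A minimal sketch (the `Bytes` wrapper is just an illustrative name):

```rust
use std::fmt;

// The orphan rule today: implementing an external trait (fmt::Display)
// for an external type (Vec<u8>) is rejected by the compiler:
//
//     impl fmt::Display for Vec<u8> { ... }
//     // error[E0117]: only traits defined in the current crate can be
//     // implemented for types defined outside of the crate
//
// The standard workaround is a local newtype wrapper:
struct Bytes(Vec<u8>);

impl fmt::Display for Bytes {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for b in &self.0 {
            write!(f, "{:02x}", b)?;
        }
        Ok(())
    }
}

fn main() {
    println!("{}", Bytes(vec![0xde, 0xad])); // prints "dead"
}
```

The proposal above would instead allow the `impl fmt::Display for Vec<u8>` directly, as long as it stays private to the crate and loses to any local impl.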

The solution proposed by the author with implicits is quite complex, I can see why it wasn't chosen.

logophobia | 2 years ago | on: EU data regulator bans personalised advertising on Facebook and Instagram

Because it means that a business (as in, facebook) knows too much about you. It's extremely invasive privacy-wise. Things that could happen:

* Micro-targeting for political advertisements (pretty bad for democracy)

* Dynamic pricing based on demographics (you can afford it, so you pay more)

* Insurance knowing too much about you (rejections based on your health, ensuring parts of the population won't be able to get good insurance)

* And just the fact that too much information being public can be harmful (blackmail, scams, etc)

* etc..

logophobia | 2 years ago | on: The magic of dependency resolution

If your dependency has a security update, how are you going to get that if you copy-paste the code? The thing these dependency managers do well is that they notify you about these types of issues.

.. That said, people need to be very careful about what they add as dependency. Having 1000+ transitive dependencies is just asking for security issues.

logophobia | 2 years ago | on: The Lava Layer Anti-Pattern (2014)

I once, for a short period, maintained a web application that had three different frontend frameworks (angular 1, angular 2, and one other I don't quite remember), and four different javascript builds that had to be run to build the application. Apparently management prioritized giving demos to investors, and migrations between frameworks were aborted halfway through.

It was completely impossible to work with. Each week the build failed for new random reasons. I hardly dared touch the thing.

This "pattern" is really a failure from multiple parties:

* Managing software engineers is an art, and you really need to understand what is happening to succeed. Prioritizing only short-term goals just ensures you're going to fail in the long term. Make sure you understand technical debt. The speed of work in a bad code-base versus a good one can differ by orders of magnitude.

* Software engineers really need to use branches properly. Work that is halfway done should not be in the main branch. Consistency and simplicity is king here. Maintaining an old and new version of software for a while can be a pain, but it's much better than maintaining a halfway converted application. Pressure from management is no reason to release stuff halfway done. And if you need to demo, release a specific branch.

Nowadays I don't even ask to do necessary maintenance. It's just part of the job. Always stick to the boy-scout rule (leave things in a better state than you found them). Make your code-bases cleaner incrementally, and eventually you'll be in a much better state.

logophobia | 2 years ago | on: Hypersonic missiles are misunderstood

Or have a lower melting point, making it more practical to use some sort of heat-resistant armoring? Might be pretty difficult to make something that both reflects light and is heat-resistant, no mirror reflects everything.

logophobia | 2 years ago | on: Temporal quality degradation in AI models

The typical setup is very simple:

* Take the previous model checkpoint and retrain/finetune it on a window with new data. You typically don't want to retrain everything from scratch; reusing the checkpoint saves time and money. Large models need specialized GPUs to train, so training typically happens separately.

* Check the model statistics in depth. We look at far more statistics than just the loss function.

* Check actual examples of the model in action.

* Check the data quality. If the data is bad, then you're just amplifying human mistakes with a model.

* Push it to production and monitor the result.

MLOps practice differs from team to team; this checklist isn't universal, just one possible approach. Everyone does things a little differently.
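The retrain-from-checkpoint loop can be sketched in a few lines. This is a toy stand-in, not anyone's production setup: the model is a tiny NumPy logistic regression, and `load_window` is a hypothetical function that would return the sliding data window.

```python
import numpy as np

rng = np.random.default_rng(0)

def load_window(week):
    """Hypothetical: return (X, y) for the sliding data window."""
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] > 0).astype(float)
    return X, y

def finetune(w, X, y, lr=0.1, steps=200):
    """Continue training from the previous checkpoint w (no cold start)."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w = np.zeros(4)  # "previous checkpoint" (here: a fresh model)
for week in range(4):
    X, y = load_window(week)
    w = finetune(w, X, y)                 # retrain on the new window
    X_val, y_val = load_window(week + 100)
    p = 1.0 / (1.0 + np.exp(-X_val @ w))
    acc = ((p > 0.5) == y_val).mean()     # check more than the loss
    assert acc > 0.7, "quality dropped, do not push to production"
```

The real pipeline would swap in the actual model, data loader, and a richer metric dashboard; the shape of the loop (checkpoint, finetune, evaluate, gate, deploy) is the point.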

> I still think of how humans work. We don't get retrained from time to time to improve, we learn continually as we gain experience. It should be doable in at least some cases, like classification where it's easy to tell if a label is right or wrong.

For some models, like fraud detection, correctness is critical. Those models need a lot of babysitting. For humans, think about how the average facebooker reacts to misinformation; you don't want that to happen with your model.

Other models are ok with more passive monitoring, things like recommendation systems.

Continuous online training can be done. Maybe take a look at reinforcement learning? It's not widely applied, has some limitations, but also some interesting applications. These types of things might become more common in the future.

logophobia | 2 years ago | on: Temporal quality degradation in AI models

That's a pretty standard part of MLOps. I have a fraud model in production; it's incrementally retrained each week on a sliding window of the last x months of data.

You can do it "online", which works for some models, but most need monitoring to make sure they don't go off the rails.
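One common "off the rails" check is comparing the production score distribution against a reference window with the population stability index (PSI). A hedged sketch; the thresholds (0.1 stable, 0.25 investigate) are conventional rules of thumb, not universal:

```python
import numpy as np

def psi(reference, current, bins=10):
    """Population stability index between two score samples."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range scores
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    cur_frac = np.histogram(current, edges)[0] / len(current)
    # Floor the fractions to avoid log(0) on empty bins.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

rng = np.random.default_rng(1)
ref = rng.normal(0.0, 1.0, 5000)      # scores at deployment time
same = rng.normal(0.0, 1.0, 5000)     # same population: low PSI
shifted = rng.normal(0.8, 1.0, 5000)  # drifted population: high PSI
assert psi(ref, same) < 0.1           # stable, nothing to do
assert psi(ref, shifted) > 0.25       # drifted, investigate before trusting
```

If the PSI on incoming scores crosses the alert threshold, that's the cue to pause the online updates and look at the data before the model amplifies the drift.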

logophobia | 2 years ago | on: From deep to long learning?

I've applied the S4 operator to successfully do long-length video classification. It's massively more efficient than a similarly scaled transformer, but it doesn't train as well. Still, even with S4 I got some impressive results, looking forward to more.

logophobia | 3 years ago | on: Show HN: Parsnip – Duolingo for Cooking

Tried a quiz. It'd be nice if there were more explanation about the things I missed. I get a question about naming ingredients, and I get some wrong. Afterwards it would be nice to get an explanation of the missed items so I don't have to google them myself. Even an in-app link to wikipedia would be enough. I also suspect that most things I got wrong were due to language differences, so translations might be nice as well.

logophobia | 6 years ago | on: A Sad Day for Rust

I didn't mean legal responsibility here (perhaps the example was somewhat poorly chosen), but surely there's some level of responsibility here? Bugs happen, security issues happen, facts of life, but actively rejecting security patches is another level of irresponsibility.

logophobia | 6 years ago | on: A Sad Day for Rust

It was advertised as a production-ready web-framework, and it was very popular. When do people get to complain? "Oh, my credit card information was stolen due to memory issues in this web-service, it's fine though, we didn't pay the guy, so we can't blame him.". Web-frameworks are cornerstones for security, and if you write one, advertise one, you need to care about security. Features, code-style, ad-hoc PRs, bug-fixes: little responsibility there, but security is something that can hurt a lot of people if done wrong. The use-after-free bug this was about could've been exploited in the right circumstances.

If I build a playground for free and it gets popular in my neighbourhood, and then collapses on some poor kid, I'm still responsible, even if I did it for free.

The way it was handled was definitely NOT productive though, the guy didn't deserve the flames.

logophobia | 6 years ago | on: Actix project postmortem

> However, that doesn't mean there wasn't a problem. There was a ton of negativity around "unsafe" when the author first released the code, and it has kind of become a meme at this point. If a project consistently uses code in an unsafe way, is it really worth spending your time vetting it for your production use case? There are plenty of web servers out there, pick one that aligns with your goals.

Should people wait until credit-card data or PII is leaked due to security vulnerabilities? The problem with security is that it impacts more than just the programmers using the framework, it impacts everyone. Does the author deserve the nastiness? No. Do security issues need to be reported, and if not fixed, called out? Yes, for big and advertised projects issues like that need to be reported. If not, there will be users that would naively expect the web-framework they're using to be somewhat secure.

The framework had a professional-looking website advertising the project, good documentation, and a user-friendly API. It advertised an actix open-source community. It had over a million downloads. I would say that expecting actix to be run like a somewhat professional project is not a strange assumption.

The way it was called out was pretty terrible though, and I doubt anyone is happy with what happened.

logophobia | 6 years ago | on: Actix project postmortem

It's not coding style, it's refusing to investigate use-after-free vulnerabilities in code because it's "boring". No one should care whether a maintainer uses tabs or spaces, or has weird variable naming, but if "coding style" leads to security issues (due to hand-rolling unsafe memory primitives), then it is an actual issue, especially in a popular web-framework.

"Don't like it, don't use it" doesn't really apply to security vulnerabilities in such major packages.

Flaming and personal attacks are not the solution to this stuff, but this drama feels somewhat self-inflicted.
