logophobia's comments
logophobia | 18 days ago | on: Wikipedia was in read-only mode following mass admin account compromise
logophobia | 1 year ago | on: Traits are a local maximum
Any trait implementation where neither the type nor the trait is local would be:
* Private, and could not be exported from the crate
* Always overridden by a local implementation, if one exists
That would solve part of the problem, right? The only thing you'd lose is crates whose purpose is to offer trait implementations for external traits/types, and that might be a good thing.
The solution the author proposes with implicits is quite complex; I can see why it wasn't chosen.
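For context, today's orphan rule forbids implementing an external trait directly for an external type; the usual workaround is a local newtype wrapper. A minimal sketch (the `Hex` wrapper is a made-up example, not from the article):

```rust
use std::fmt;

// Neither `Display` nor `Vec<u8>` is local to this crate, so a direct
// `impl fmt::Display for Vec<u8>` is rejected by the orphan rule.
// The standard workaround: wrap the external type in a local newtype.
struct Hex(Vec<u8>);

impl fmt::Display for Hex {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for byte in &self.0 {
            write!(f, "{:02x}", byte)?;
        }
        Ok(())
    }
}

fn main() {
    let bytes = Hex(vec![0xde, 0xad, 0xbe, 0xef]);
    println!("{}", bytes); // prints "deadbeef"
}
```

Under the scheme sketched above, the wrapper would be unnecessary: the impl would simply stay private to the crate that wrote it.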
logophobia | 2 years ago | on: EU data regulator bans personalised advertising on Facebook and Instagram
* Micro-targeting for political advertisements (pretty bad for democracy)
* Dynamic pricing based on demographics (you can afford it, so you pay more)
* Insurance companies knowing too much about you (rejections based on your health, leaving parts of the population unable to get good insurance)
* And just the fact that too much information being public can be harmful (blackmail, scams, etc)
* etc..
logophobia | 2 years ago | on: The magic of dependency resolution
.. That said, people need to be very careful about what they add as a dependency. Having 1000+ transitive dependencies is just asking for security issues.
logophobia | 2 years ago | on: The Lava Layer Anti-Pattern (2014)
It was completely impossible to work with. Each week the build failed for new random reasons. I hardly dared touch the thing.
This "pattern" is really a failure from multiple parties:
* Managing software engineers is an art, and you really need to understand what is happening to succeed. Prioritizing only short-term goals ensures you're going to fail in the long term. Make sure you understand technical debt: the speed of work in a bad code-base versus a good one can differ by orders of magnitude.
* Software engineers really need to use branches properly. Work that is halfway done should not be in the main branch. Consistency and simplicity are king here. Maintaining an old and a new version of the software for a while can be a pain, but it's much better than maintaining a halfway-converted application. Pressure from management is no reason to release half-finished work. And if you need to demo, release from a specific branch.
Nowadays I don't even ask to do necessary maintenance. It's just part of the job. Always stick to the boy-scout rule (leave things in a better state than you found them). Make your code-bases cleaner incrementally, and eventually you'll be in a much better state.
logophobia | 2 years ago | on: Hypersonic missiles are misunderstood
logophobia | 2 years ago | on: Unlimiformer: Long-Range Transformers with Unlimited Length Input
logophobia | 2 years ago | on: Temporal quality degradation in AI models
* Take the previous model checkpoint and retrain/finetune it on a window of new data. You typically don't want to retrain everything from scratch; reusing the checkpoint saves time and money. Large models need specialized GPUs to train, so training typically happens separately.
* Check the model statistics in depth. We look at far more statistics than just the loss function.
* Check actual examples of the model in action.
* Check the data quality. If the data is bad, you're just amplifying human mistakes with a model.
* Push it to production and monitor the result.
MLOps practice differs from team to team; this checklist isn't universal, just one possible approach. Everyone does things a little differently.
> I still think of how humans work. We don't get retrained from time to time to improve, we learn continually as we gain experience. It should be doable in at least some cases, like classification where it's easy to tell if a label is right or wrong.
For some models, like fraud detection, correctness is important. Those models need a lot of babysitting. For the human comparison, think about how the average facebooker reacts to misinformation; you don't want that to happen with your model.
Other models are ok with more passive monitoring, things like recommendation systems.
Continuous online training can be done. Maybe take a look at reinforcement learning? It's not widely applied, has some limitations, but also some interesting applications. These types of things might become more common in the future.
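To make "continuous online training" concrete, here's a toy sketch: a single-parameter linear model updated one sample at a time with SGD as data streams in. Purely illustrative (made-up model and data, not how production systems are built), and it shows why monitoring matters: nothing in the loop itself stops the weight from drifting if the incoming data goes bad.

```rust
// Toy online learner: a 1-D linear model y ~ w * x, updated one
// observation at a time with stochastic gradient descent.
struct OnlineModel {
    w: f64,  // single learned weight
    lr: f64, // learning rate
}

impl OnlineModel {
    fn predict(&self, x: f64) -> f64 {
        self.w * x
    }

    // One gradient step on a single (x, y) sample: gradient of the
    // squared error (w*x - y)^2 with respect to w is 2*(w*x - y)*x.
    fn update(&mut self, x: f64, y: f64) {
        let err = self.predict(x) - y;
        self.w -= self.lr * err * x;
    }
}

fn main() {
    let mut model = OnlineModel { w: 0.0, lr: 0.1 };
    // Simulated stream of samples from the true relation y = 2x.
    for _ in 0..100 {
        for &(x, y) in &[(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] {
            model.update(x, y);
        }
    }
    // The weight converges to the true slope.
    assert!((model.w - 2.0).abs() < 1e-6);
    println!("learned w = {}", model.w);
}
```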
logophobia | 2 years ago | on: Temporal quality degradation in AI models
You can do it "online", which works for some models, but most need monitoring to make sure they don't go off the rails.
logophobia | 2 years ago | on: From deep to long learning?
logophobia | 2 years ago | on: Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows
Both use a hierarchical transformer, adapting the transformer network architecture to vision tasks more efficiently.
logophobia | 3 years ago | on: NordVPN library and client code open-sourced
A VPN provider really needs a lot of trust, and that's easy to lose.
logophobia | 3 years ago | on: Plex: Important notice of a potential data breach
logophobia | 3 years ago | on: Show HN: Parsnip – Duolingo for Cooking
logophobia | 3 years ago | on: The overengineered solution to my pigeon problem
Will make things a little more robust (and overengineered!).
logophobia | 6 years ago | on: Bocker – Docker implemented in around 100 lines of Bash (2015)
logophobia | 6 years ago | on: A Sad Day for Rust
logophobia | 6 years ago | on: A Sad Day for Rust
If I build a playground for free and it gets popular in my neighbourhood, and then collapses on some poor kid, I'm still responsible, even if I did it for free.
The way it was handled was definitely NOT productive though, the guy didn't deserve the flames.
logophobia | 6 years ago | on: Actix project postmortem
Should people wait until credit-card data or PII is leaked due to security vulnerabilities? The problem with security is that it impacts more than just the programmers using the framework; it impacts everyone. Does the author deserve the nastiness? No. Do security issues need to be reported, and if not fixed, called out? Yes. For big, advertised projects, issues like that need to be reported; otherwise, users will naively expect the web framework they're using to be somewhat secure.
The framework had a professional-looking website advertising the project, good documentation, and a user-friendly API. It advertised an actix open-source community. It had over a million downloads. I would say that expecting actix to be run like a somewhat professional project is not a strange assumption.
The way it was called out was pretty terrible though, and I doubt anyone is happy with what happened.
logophobia | 6 years ago | on: Actix project postmortem
"Don't like it, don't use it" doesn't really apply to security vulnerabilities in such major packages.
Flaming and personal attacks are not the solution to this stuff, but this drama feels somewhat self-inflicted.