This is cool. It looks like you're integrating static analysis across both the user's codebase and the underlying dependency. Very curious to see where it goes.
We've found it deceptively complex to evaluate the safety of dependency upgrades. You often need context that's difficult or impossible to determine statically in a dynamically typed language. An example I use for Ruby is the keyword-argument separation in the 2.7 -> 3.0 migration (https://www.ruby-lang.org/en/news/2019/12/12/separation-of-p...). It's trivial to profile for impacted call sites at runtime, but basically impossible to do statically without adopting something like Sorbet. Do you have any benchmarks on how reliable your evaluations are on plain JS vs. TypeScript codebases?
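For anyone unfamiliar with that change, here's a minimal sketch of why it bites (the method and variable names are hypothetical, chosen for illustration):

```ruby
# Ruby 2.7 -> 3.0 keyword-argument separation, minimally sketched.
def set_option(name:, value:)
  "#{name}=#{value}"
end

opts = { name: "timeout", value: 30 }

result =
  begin
    # Ruby 2.7 implicitly converts the trailing hash to keyword arguments
    # (with a deprecation warning); Ruby 3.0+ treats it as a positional
    # argument and raises ArgumentError instead.
    set_option(opts)
  rescue ArgumentError
    # The Ruby 3 fix is an explicit double-splat at the call site. But
    # finding every such site statically requires knowing that `opts` is
    # a Hash, which is exactly what an untyped codebase can't tell you.
    set_option(**opts)
  end
```

The call site looks identical whether `opts` is a Hash or something else, which is why runtime observation is so much easier here than static analysis.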
We ended up embracing runtime profiling of deprecation warnings and breaking changes as part of upgrading dependencies for our customers, and found that context unlocks more reliable code transformations. But then you're stuck building an SDK for every language you want to support, and it's more friction than installing a GitHub App.
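As a sketch of what that kind of runtime profiling can look like in Ruby (assuming Ruby 3.0+ for the `category:` keyword; this uses the standard `Warning` hook, not any particular vendor SDK, and the module and constant names are made up):

```ruby
require "set"

# Hypothetical profiler that records every warning the interpreter emits
# at runtime, so an upgrade tool can see which call sites are affected.
module DeprecationProfiler
  SITES = Set.new

  # Ruby routes warnings through Warning.warn; extending the Warning
  # module is the documented way to intercept them (since Ruby 2.5; the
  # `category:` keyword requires Ruby 3.0+).
  def warn(message, category: nil)
    SITES << [message, category]
    super # still print the warning as usual
  end
end

Warning.extend(DeprecationProfiler)

# Simulate what the interpreter does when deprecated code runs:
Warning.warn("Foo#bar is deprecated\n", category: :deprecated)
```

Run the test suite (or exercise production traffic) with a hook like this installed and `SITES` becomes a list of exactly the code paths the upgrade has to migrate.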
One would imagine they're broadly similar, but that rests on the assumption that the codebases themselves are similar as well.
Migrations between versions can vary enormously, largely as a function of the parent codebase rather than the dependency change itself. A simple example is a bump in the supported Node.js version: it's common for new dependency versions to drop support for older Node runtimes, but migrating the parent codebase to a newer runtime may require large custom efforts, like changing module systems.
GitHub PM here. We have tried this, but we weren't able to get results that we were satisfied with. Of course, you have to revisit these things regularly, as the models and wider state of the art are evolving so quickly!
rohitpaulk|5 months ago
Dependency upgrades seem like a natural fit for agents because: (a) they’re broadly similar across companies,
(b) they aren’t time-sensitive, so the agent can take hours without anyone noticing, and
(c) customers are already accustomed to using bots here, just bad ones
poetril|5 months ago
0: https://fossabot.com/
robszumski|5 months ago
And, as someone whose startup (EdgeBit, recently acquired by FOSSA) built a new JS/TS static-analysis engine: it's just hard to get right.
jamietanna|5 months ago
Where did you see that? I must've missed it in the announcement.
jamietanna|5 months ago
(I'm one of the maintainers on Renovate)