HuShifang | 1 year ago:

> BNNs bring the following advantages over GPs: First, training large GPs is computationally expensive, and traditional training algorithms scale as the cube of the number of data points in the time series. In contrast, for a fixed width, training a BNN will often be approximately linear in the number of data points. Second, BNNs lend themselves better to GPU and TPU hardware acceleration than GP training operations.

If I'm not mistaken, Hilbert Space Gaussian Processes (HSGPs) are O(mn + m) (where m is the number of basis functions, often something like m = 30, 60, or 100), which is also a huge improvement over conventional GPs' O(n^3). I know there are some constraints on HSGPs (e.g. they work best with stationary time series, and they're not quite as accurate, flexible, or readily interpretable and tunable as conventional GPs), but what would be the argument for an AutoBNN over an HSGP? Is it mainly the lack of a need for domain-expert input?

melondonkey | 1 year ago:

Damn, this is like the fifth time series framework posted this week.

This one seems theoretically more interesting than some of the others but practically less useful. For one, who wants to do stuff in TensorFlow anymore, let alone TensorFlow Probability? TFP has had ample time to prove its worth, and from what I can tell almost no one is using it because of a worst-of-both-worlds problem: the DL community prefers PyTorch and the stats community prefers Stan.

I'm starting to feel like time series and forecasting research is going off the rails as every company tries to jump on the DL/LLM hype train and tell us that neural nets somehow know something we don't about predicting the future from the past.

leventov | 1 year ago:

> For one, who wants to do stuff in tensorflow anymore let alone tensorflow-probability.

AutoBNN is a JAX library and technically has nothing to do with TF Probability; it was merely developed by the TF Probability team.

> DL community prefers pytorch and stats community prefers Stan.

The JAX ecosystem for stats is growing: NumPyro is built on JAX, PyMC has a JAX backend, https://github.com/blackjax-devs/blackjax offers effective samplers, there is https://github.com/jax-ml/bayeux, and now AutoBNN.

> This one seems theoretically more interesting than some others but practically less useful.

Are there other reasons you think AutoBNN is not practically useful, apart from it being built on the wrong foundation (which turned out to be a mistaken belief)?

chaz6 | 1 year ago:

I used to work at National Grid, where we had a non-parametric mixture model developed by some very smart quants. It was used to predict gas escapes, and it took the current SLA performance (i.e. the cost function) into account to predict how many engineers we would need to reserve to respond. The main variables were weather-related (temperature, wind), along with some obvious modifiers like public holidays.
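A minimal numpy sketch of the HSGP approximation HuShifang refers to, assuming a squared-exponential kernel; the function names and parameter defaults here are illustrative, not taken from any particular library. The point is the scaling: training solves an m x m system built from an n x m feature matrix, never the n x n kernel system behind conventional GPs' O(n^3) cost.

```python
import numpy as np

def hsgp_features(x, m, L):
    # Laplacian eigenfunctions on [-L, L] -- the HSGP basis.
    # Building this matrix is O(n * m).
    j = np.arange(1, m + 1)
    return np.sin(np.pi * j * (x[:, None] + L) / (2 * L)) / np.sqrt(L)

def rbf_spectral_density(w, sigma2, ell):
    # Spectral density of the squared-exponential kernel, evaluated at
    # the basis frequencies; it sets the prior variance of each weight.
    return sigma2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * w) ** 2)

def hsgp_fit_predict(x, y, x_new, m=30, L=2.0, sigma2=1.0, ell=0.3, noise=0.1):
    sqrt_lam = np.pi * np.arange(1, m + 1) / (2 * L)  # sqrt of eigenvalues
    s = rbf_spectral_density(sqrt_lam, sigma2, ell)   # prior weight variances
    Phi = hsgp_features(x, m, L)                      # n x m
    # Bayesian linear regression in the m-dimensional basis:
    # the only linear solve is m x m, regardless of n.
    A = Phi.T @ Phi / noise**2 + np.diag(1.0 / s)
    w_mean = np.linalg.solve(A, Phi.T @ y / noise**2)
    return hsgp_features(x_new, m, L) @ w_mean
```

With, say, 200 points of noisy sin(3x) data on [-1, 1], the posterior mean recovered this way tracks the true function closely, while the per-iteration cost stays linear in n for fixed m -- consistent with the scaling the thread discusses.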