mikehollinger | 1 year ago
For example, some former colleagues’ timeseries foundation model (Granite TS) was doing pretty well when we were experimenting with it. [1]
An aha moment for me was realizing that you can think of anomaly models as effectively forecasting the next N steps, then noticing when the actual measured values are “different enough” from the expected ones. This is simple to draw on a whiteboard for one signal, but when it’s multivariate, it’s pretty neat that it works.
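To make that concrete: here's a minimal numpy sketch of the "forecast, then flag big deviations" idea. This is not the Granite TS API - the persistence forecast and the Mahalanobis-style scoring are stand-ins I picked for illustration; a real setup would plug in the model's actual multi-step forecast.

```python
import numpy as np

def flag_anomalies(actual, forecast, threshold=3.0):
    """Flag steps where actual deviates 'enough' from the forecast.

    actual, forecast: (T, C) arrays -- T steps, C channels.
    Residuals are scored with a Mahalanobis-style distance so that
    the channels are judged jointly rather than one at a time.
    """
    resid = actual - forecast                       # (T, C) forecast errors
    mu = resid.mean(axis=0)
    cov = np.cov(resid, rowvar=False) + 1e-6 * np.eye(resid.shape[1])
    inv = np.linalg.inv(cov)
    centered = resid - mu
    # Per-step quadratic form: d_t = sqrt((r_t - mu)^T C^-1 (r_t - mu))
    d = np.sqrt(np.einsum("tc,cd,td->t", centered, inv, centered))
    return d > threshold

# Toy demo: a persistence "forecast" (previous value), with a spike injected.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=(200, 3)), axis=0)  # 3-channel random walk
series[150] += 25.0                                    # injected anomaly
forecast = np.vstack([series[:1], series[:-1]])        # predict "same as last step"
flags = flag_anomalies(series, forecast)
```

The joint (covariance-aware) scoring is what makes the multivariate case work: a step can be unremarkable per-channel but still anomalous given how the channels usually move together.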
[1] https://huggingface.co/ibm-granite/granite-timeseries-ttm-r1
0cf8612b2e1e | 1 year ago
[0] https://scikit-learn.org/stable/modules/generated/sklearn.en...
tessierashpool9 | 1 year ago
mikehollinger | 1 year ago
My naive view was that there was some sort of “normalization” or “pattern matching” happening. Like - you can look at a trend line that generally has some shape, and notice when something changes or there’s a discontinuity. That’s a very simplistic view - but I assumed that stuff was doing regressions and noticing when something fell outside a statistical norm, like a k-means analysis. Which works, sort of, but is difficult to generalize.
apwheele | 1 year ago
delusional | 1 year ago
> TTM-1 currently supports 2 modes:
> Zeroshot forecasting: Directly apply the pre-trained model on your target data to get an initial forecast (with no training).
> Finetuned forecasting: Finetune the pre-trained model with a subset of your target data to further improve the forecast