eulers_secret | 27 days ago
This absolutely sucks, especially since tool calling can burn through tokens really fast. Feels like a not-so-gentle nudge toward using their 'official' tooling (read: VS Code), even though there was a recent announcement about how GHCP works with opencode: https://github.blog/changelog/2026-01-16-github-copilot-now-...
No mention of it being severely gimped by the context limit in that press release, of course (tbf, why would they lol).
However, if you go back to aider, 128K tokens is a lot, and the same goes for web chat... so it's not a total killer, but I wouldn't spend my money on that particular service when there are better options!
timr | 19 days ago
My experience is that the models all lose focus long before they fill their context window, so I'm not crying over the lower limit.