We use this to power things like find-references or jump-to-def, "symbol search" and autocomplete, or more complicated code queries and analysis (even across languages). Imagine rich LSPs without a local checkout, web-based code queries, or seeding fuzzers and static analyzers with entry points in code.
Our focus has been on very large scale, multi-language code indexing, and then low latency (e.g. hundreds of micros) query times, to drive highly interactive developer workflows.
I'm really struggling to understand what Glean does, and why I would use it. Most important: your landing page should quickly show what Glean does that a typical IDE (Visual Studio, Visual Studio Code, Eclipse, etc.) doesn't.
Specifically, things like "Go to definition," and tab completion have been in industry-leading IDEs for at least 20 years.
What's novel about Glean? It seems like a lot of hoops to jump through when Visual Studio (and Visual Studio Code) can index a very large codebase in a few seconds. (And don't require a server and database to do it.)
Perhaps a 20-second video (no sound) showing what Glean does that other IDEs don't will help get the message across?
I see you support Thrift and Buck. Would you also be interested in adding Proto and Bazel support? Being able to query the code based on the build graph (sort of) would be very cool.
Briefly skimmed the docs and it noted that it doesn't store expressions from the parsed AST. That means it's mostly a symbol lookup system?
When doing large system refactoring, searching by code patterns is the number one thing I'd like to have a tool for. For example, being able to query for all for-loops in a codebase that have a call to function X within their body.
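For Python at least, that particular query is already expressible with the standard library's `ast` module. A rough sketch (the function name is just a placeholder for your X):

```python
import ast

def for_loops_calling(source: str, func_name: str):
    """Yield (line, col) of every for-loop whose body contains a call to func_name."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.For, ast.AsyncFor)):
            # Walk the loop's subtree looking for a matching call.
            for child in ast.walk(node):
                if (isinstance(child, ast.Call)
                        and isinstance(child.func, ast.Name)
                        and child.func.id == func_name):
                    yield (node.lineno, node.col_offset)
                    break

code = """
for item in items:
    process(item)
for other in others:
    ignore(other)
"""
print(list(for_loops_calling(code, "process")))  # [(2, 0)]
```

The point of a system like Glean is presumably to answer this kind of question over millions of files without re-parsing, and across languages.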
Since this is HN, could you please share more technical/impl details, e.g. what makes it more scalable and faster in general and also compared to other similar engines?
Feature request: a live demo! I would love to try out the web interface described at https://glean.software/docs/trying without pulling down a 7GB Docker image first.
I had a look at the site and it seems to be parsing source code in multiple languages and storing the parsed "syntax trees" into a database for querying.
I would love to know what the use case for this tool is, aside from maybe being a source for presentations ("we have 5 million if statements").
How can this be used to improve code quality or any other aspect of the code lifecycle?
Or is it solving problems in a completely different problem area?
Glean is focused on storing and querying data about the code. The idea is that you have your own program to collect that data, then you use Glean to store that compactly and to have snappy queries.
You would create entries like "this is a declaration of X", "this is a use of X". Then you can query things like "give me all uses of X" in sub-millisecond time. You hook that up to an LSP server then you get almost zero-cost find-references, jump-to-definition, etc. The snappy queries also mean it becomes possible to perform whole codebase (and cross-language) analysis. That is, answering questions like "what code is not referenced from this root?", "does this Haskell function use anything that calls malloc?" (analysis through the ffi barrier).
One can also attach all kinds of information from different sources to code entities, not only things derived from the source itself. You add things like run-time costs, frequency of use, common errors, etc, and an LSP server could make all of it available right in your editor.
For very large or complex codebases, where it is just too expensive or too complicated to calculate this information locally, a system like this becomes very useful.
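To make the shape of this concrete, here is a toy model of the idea in Python. This is not Glean's actual API, schema, or query language (Angle), just an illustration of facts plus a snappy query:

```python
from collections import defaultdict

# Toy fact store: predicate name -> list of fact tuples.
facts = defaultdict(list)

def add_fact(predicate, *key):
    facts[predicate].append(key)

# "This is a declaration of X", "this is a use of X".
add_fact("decl", "X", "lib.py", 10)
add_fact("use",  "X", "app.py", 42)
add_fact("use",  "X", "app.py", 57)
add_fact("use",  "Y", "app.py", 60)

def find_references(name):
    """Query: give me all uses of `name`."""
    return [(f, line) for (n, f, line) in facts["use"] if n == name]

print(find_references("X"))  # [('app.py', 42), ('app.py', 57)]
```

An LSP server sitting on top of a store like this never has to parse or typecheck anything at query time, which is where the near-zero cost comes from.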
Kythe has one schema, whereas with Glean each language has its own schema with arbitrary amounts of language-specific detail. You can get a language-agnostic view by defining an abstraction layer as a schema. Our current (work in progress) language-agnostic layer is called "codemarkup" https://github.com/facebookincubator/Glean/blob/main/glean/s...
For wiring up the indexer, there are various methods, it tends to depend very much on the language and the build system. For Flow for example, Glean output is just built into the typechecker, you just run it with some flags to spit out the Glean data. For C++, you need to get the compiler flags from the build system to pass to the Clang frontend. For Java the indexer is a compiler plugin; for Python it's built on libCST. Some indexers send their data directly to a Glean server, others generate files of JSON that get sent using a separate command-line tool.
References use different methods depending on the language. For Flow for example there is a fact for an import that matches up with a fact for the export in the other file. For C++ there are facts that connect declarations with definitions, and references with declarations.
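The import/export matching described above amounts to a join on a shared key. A hypothetical sketch (the fact shapes here are illustrative, not Glean's):

```python
# Each compilation unit emits facts independently; references link up
# because both sides emit the same (module, name) key.
exports = [
    ("lib", "parse",  "lib.js:3"),
    ("lib", "render", "lib.js:20"),
]
imports = [
    ("app.js:1", "lib", "parse"),
    ("ui.js:2",  "lib", "render"),
]

# Resolve each import site to its definition site by joining on (module, name).
defs = {(mod, name): loc for (mod, name, loc) in exports}
resolved = {site: defs[(mod, name)] for (site, mod, name) in imports}
print(resolved)  # {'app.js:1': 'lib.js:3', 'ui.js:2': 'lib.js:20'}
```

No heuristics needed as long as both indexers agree on the key, which is why this works per-language and, in principle, across languages.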
Datalog-ish query languages sure are a fun area to be working in. Such DSLs exist for various domains and, like Semmle's CodeQL or the more academic Soufflé, Glean focuses on the domain of programming languages.
Glean seems to still be work in progress, e.g. no support for recursive queries yet, but I wonder where they're heading. I'll certainly keep an eye on the project but I wonder how exactly Glean aims to -- or maybe it already does -- improve upon the alternatives? From the talk linked in another comment I guess the distinctive feature may be the planned integration with IDEs. Correct me if I'm wrong. Other contenders provide great querying technology but there is indeed no strong focus on making such tech really convenient and integrated yet.
I think the point in the space Glean hits well is efficiency/latency (enough to power real time editing, like in IDE autocomplete or navigation), while having a schema and query language generic enough to do multiple languages and code-like things. You can accurately query JavaScript or Rust or PHP or Python or C++ with a common interface, which is a bit nuts :D
This seems very interesting. I would love to see more alternatives to Tree-sitter and Microsoft's LSP; what makes those hard to use is the lack of examples and tutorials, so I hope there will be examples and tutorials here. For example: how do you find all variables in scope when the text cursor is on line x and column y in /file/path/file.js?
Very cool! How does this differ algorithmically from the trigram based search that everything uses from google code search from like 20 years ago?
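For context, the trigram approach from Google Code Search indexes every 3-character substring so a text or regex query can be narrowed to candidate files before scanning them. A minimal sketch of that baseline:

```python
from collections import defaultdict

def trigrams(text):
    return {text[i:i + 3] for i in range(len(text) - 2)}

# Build: map each trigram to the set of documents containing it.
docs = {"a.c": "int main(void)", "b.c": "void helper(int x)"}
index = defaultdict(set)
for name, text in docs.items():
    for t in trigrams(text):
        index[t].add(name)

def candidates(query):
    """Files containing every trigram of the query (may include false positives)."""
    sets = [index[t] for t in trigrams(query)]
    return set.intersection(*sets) if sets else set(docs)

print(candidates("main"))  # {'a.c'}
```

Trigram indexes answer "where does this text appear?"; my understanding is Glean instead stores semantic facts (declarations, uses, types), so the comparison is text search versus structured queries rather than two variants of one algorithm.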
And continuing off of that theme in practical terms how does it stand up against zoekt?
I’m curious because zoekt is kind of slow when it comes to ingesting large amounts of code like all of the publicly available code on GitHub
The few people using that commercially have basically had to spend a lot of time rewriting parts of it to make their goal of public codesearch for all attainable.
I and a few people I know are pretty convinced that there are better and easier ways / technologies to make that happen.
Great job with this. What's your roadmap for releasing some of the tooling for editor integration? Really, the question is should I build something or wait a few weeks?
We have used SciTools Understand to do this on local source code. What is the use of putting this in the cloud? The website doesn't really explain that.
The problem is that TypeScript does not scale to the size of the giant monorepo at Facebook, with hundreds of thousands, if not millions of files. Since they aren't organized into packages, it is just one giant flat namespace (any JS file can import any other JS file by the filename). It is pretty amazing to change a core file and see type errors across the entire codebase in a few seconds. The main way to scale in TypeScript is Project References, which don't work when you haven't separated your code into packages. (Worked at Facebook until June 2021).
progval|4 years ago
And what would be the disk and memory requirements for this? Could they be distributed across a handful of servers?
the_duke|4 years ago
Seems like there are only indexers for Flow and Hack though.
Will there be more indexers built by Facebook, or will it rely on community contributions?
coderdd|4 years ago
One of the pain points using Kythe is wiring up the indexer to the build system. Would Glean indexers be easier to wire up for the common cases?
Another is the index post-processing, which is not very scalable in the open source version (due to go-beam having rough Flink support, for example).
Third, how does it link up references across compilation units? Is it heuristic, or does it rely on unique keys from indexers matching? And across languages?
aabaker99|4 years ago
How do I write a schema and indexer for my favorite programming language that isn't currently (and won't be) supported with official releases?
For Schemas, [1] says to modify (or base new ones off) these: https://github.com/facebookincubator/Glean/tree/main/glean/s...
For Indexers, it's a little less clear but it looks like I need to write my own type checker?
[1] https://glean.software/docs/schema/workflow
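You don't necessarily need a full type checker to get started; an indexer can begin with a parser and emit declaration facts as JSON for a command-line loader. A rough Python-indexing-Python sketch (the predicate name and JSON shape here are illustrative, not Glean's exact batch format):

```python
import ast
import json

def index_file(path, source):
    """Emit declaration facts for a Python file as a JSON-encodable batch."""
    batch = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            kind = "function" if isinstance(node, ast.FunctionDef) else "class"
            batch.append({
                "predicate": "example.Declaration",
                "key": {"name": node.name, "kind": kind,
                        "file": path, "line": node.lineno},
            })
    return batch

src = "class Cache:\n    def get(self, k):\n        return None\n"
print(json.dumps(index_file("cache.py", src), indent=2))
```

Use facts and cross-references are where it gets harder, since resolving a name to its declaration generally does need scope or type information.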
muglug|4 years ago
As long as billions of people keep using Facebook they can maintain their own static analysis tooling for Javascript for as long as they want.
da39a3ee|4 years ago
That seems like a very tractable machine learning problem, yet all I could find was a single python library which looks nice, but doesn't have much adoption, and requires installing the entirety of tensorflow despite the fact that users just want a trained model and a predict() function.
Why doesn't a popular library like this exist?