After my experience with Swift, and the bone-headed decisions made in the lower-level design of the language to make it more 'ergonomic' at the expense of compile speed and debuggability, I worry about Chris Lattner making the same mistakes again.

Swift literally took 10x more time to compile than equivalent Objective-C code for not much benefit when it first came out, along with a debugger that to this day is significantly slower, less reliable, and buggier than the Objective-C one. It also does not scale linearly in compile time when you throw more cores at it, unlike C, C++ and Objective-C, and linking is pretty much solved now with the mold linker.

There are / were huge chokepoints in the compilation chain that caused many of these regressions, and when I learned how minor the benefits they bought were, it made me facepalm. So many expensive engineer-years of life wasted for so little benefit.

80% of the benefits of Swift that were cheap to implement could have been layered onto something equivalent to Objective-C, with the new ergonomic syntax, for little to no compile-time penalty: things like strong nullables and enum ADT types. To this day, you still have to codegen mocks. It's frustrating.
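The two features singled out here can be sketched in Python terms (in Swift these are `Optional` and enums with associated values; the names below are illustrative, not from any real API):

```python
from dataclasses import dataclass
from typing import Optional, Union

# An enum-style ADT: a value is exactly one of these shapes,
# so a type checker can tell you which cases you handled.
@dataclass(frozen=True)
class Ok:
    value: int

@dataclass(frozen=True)
class Err:
    message: str

Result = Union[Ok, Err]

def parse_age(text: str) -> Result:
    return Ok(int(text)) if text.isdigit() else Err("not a number")

# A "strong nullable": None-ness is explicit in the signature,
# so callers are forced to consider the missing case.
def describe(age: Optional[int]) -> str:
    return "unknown" if age is None else f"{age} years"

print(parse_age("42"))  # Ok(value=42)
```

Neither feature requires whole-program type inference, which is the commenter's point: they could ride on a fast, per-file compilation model.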
I would wait and see what benefits Mojo actually brings, and I hope that Chris and the team there have learned from their mistakes with Swift and chosen compile speed over little features that many could live without.

I also hope they use this as an opportunity to solve Python's horrible package-management problem, and to copy ideas from Rust's Cargo liberally.
I have had literally the same experience with slow, overly complex type systems and too much sugar that you're pointing out. I've learned a lot from it, and the conclusion is "don't do it again". You can see a specific comment about this at the end of this section:
https://docs.modular.com/mojo/notebooks/HelloMojo.html#overl...
> Mojo doesn’t support overloading solely on result type, and doesn’t use result type or contextual type information for type inference, keeping things simple, fast, and predictable. Mojo will never produce an “expression too complex” error, because its type-checker is simple and fast by definition.
It's also interesting that Rust et al. made similar (but also different) mistakes and have compile-time scaling issues. Mojo has a ton of core compiler improvements as well, addressing the "LLVM is slow" sort of issues that "zero-cost abstraction" languages hit when they expect LLVM to do all the work for them.

-Chris Lattner @ Modular
I wish people would prioritize this more.

With every new language that comes out, sure, it brings a few interesting features, but it always takes 5+ years for IDE support, a good debugger, and compile speed (if ever) to arrive.

Good debugging tooling and compiler speed probably outweigh all the other benefits in the long run. Rust is a good example. JavaScript transpiling too. And once it arrives, people will probably be onto the next new language that barely has IDE syntax highlighting ready.
> pythons horrible package management
It really is terrible. The import system is so convoluted, but I guess it's quite old.
Python really needs a good monorepo-friendly package manager like Node's pnpm.
Even though the ESM transition is a clusterfuck, the new module system is actually quite nice and straightforward if you don't need to touch older code.
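On the Python side, the "convoluted" import behavior being complained about can be seen in miniature: `import` is a linear scan of `sys.path`, so whichever directory happens to come first wins, which is one reason monorepo layouts get painful. A minimal sketch:

```python
import importlib
import pathlib
import sys
import tempfile

# Write a throwaway module into a temp directory...
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "greet.py").write_text("MESSAGE = 'hello'\n")

# ...and put that directory at the front of sys.path. The front entry
# shadows everything else, including installed packages of the same name,
# which is the classic source of surprising import resolution.
sys.path.insert(0, str(tmp))
greet = importlib.import_module("greet")
print(greet.MESSAGE)  # hello
```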
Perhaps other systems programming languages can learn from the D language, whose reference compiler, DMD, compiles itself and the entire standard library in less than 5 seconds, with parallelism turned off [1].

[1] Ask HN: Why do you use Rust, when D is available? https://news.ycombinator.com/item?id=23494490
Don’t C/C++ scale linearly with more cores because you just compile different compilation units on different cores? Wouldn’t the same apply to Swift? Note: I’ve never used Swift.
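A toy model of the intuition behind this question (illustrative numbers, not measurements): per-file compilation is embarrassingly parallel, so wall time is roughly the serial part plus the parallel part divided by core count, and a compiler with a large serial chokepoint, such as whole-module type inference, stops benefiting from extra cores.

```python
def wall_time(serial: float, parallel: float, cores: int) -> float:
    """Amdahl-style estimate: the serial portion never shrinks."""
    return serial + parallel / cores

# C-like toolchain: almost all work happens per translation unit.
c_like = [wall_time(1, 99, n) for n in (1, 4, 16)]
# Hypothetical compiler with a big serial phase.
choked = [wall_time(40, 60, n) for n in (1, 4, 16)]

print(c_like)  # [100.0, 25.75, 7.1875]
print(choked)  # [100.0, 55.0, 43.75]
```

Both start at the same single-core time, but at 16 cores the chokepointed compiler is still 6x slower: throwing cores at it cannot help past its serial phase.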
Jeremy here. Thank you for sharing my article on HN! I've used more programming languages than I can count since I started coding 40 years ago, and it's not often I've gotten really excited about a new language. So the fact I've written this should tell you how special I think Mojo is.
It's early days of course - Mojo is still a prerelease, which means it's only available in an online playground and it's not open source yet (although that's planned). But for such a small team to create something like this so quickly gives me a lot of confidence for what's coming over the next few months.
Let me know if you have any questions or comments.
It seems like Julia already had a pretty good solution to the two-language problem, as well as being designed from the ground up for numerical computation (Python has NumPy, but it seems bolted on and clunky by comparison). Yes, there are issues with large executables (a large runtime), non-optimal GPU kernels, etc., but it seems like many of Julia's nagging issues could have been fairly easily solved if it had received as much investment as some of the alternatives: Modular appears to have a lot of funding to develop Mojo, and frameworks like PyTorch and TensorFlow have the backing of mega tech corporations. At one point Swift for TensorFlow was going to be the next big ML language and was being funded and developed by Google.
I'm not sure there's a question here... more of a frustration that Julia seems to be a very good solution already (and is more mature than Mojo) and yet it seems to get passed over because companies decide to fund some new(er) shiny thing instead.
BTW: I think your matrix multiplication demo could achieve very similar performance (including the SIMD, parallelization and LoopVectorization) in Julia.
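For reference, the pure-Python baseline that matmul benchmarks like the one in the demo start from is the naive triple loop below; this is the slow end of the comparison, not the tuned Julia or Mojo version:

```python
def matmul(a, b):
    """Naive O(n^3) matrix multiply over lists of lists."""
    n, k, m = len(a), len(b), len(b[0])
    assert all(len(row) == k for row in a), "inner dimensions must match"
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for p in range(k):
                # Every one of these operations boxes/unboxes a Python
                # object, which is why typed/SIMD versions win so big.
                acc += a[i][p] * b[p][j]
            out[i][j] = acc
    return out

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]
```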
Hi Jeremy, thanks for sharing this, hope it works out well. Also, thanks for the past work on DawnBench etc. that y'all did. It really helped push an era of speedy deep learning to the forefront and helped stoke the flames of my passion for this particular subfield of research. fast.ai really put out a ton of good work that inspired and helped me a lot. I'm generally stuck on the linear version of OneCycle, but I've referred a lot to y'all and the community over the years when building out my toolbox.
In any case, you have more years under your belt in the software field than me, so I'm sure you have some of the standard skepticisms too; hopefully it works out really well. I'll be keeping an eye on this software, as it seems promising, especially since I tend towards the more esoteric methods that tend to break the existing deep-learning tooling out there.
Thanks, Jeremy. It does sound exciting. It reminds me a bit of what Blaze hoped to offer (a unified interface, but to diverse data-storage systems instead of processing systems) but which never came to fruition. And folks have been talking about needing an ML/AI-centric language that could feel more natural and expressive than the abstractions that TensorFlow, PyTorch, and JAX provide. Maybe Mojo is it.
To get full performance though, you can't write just Python. As you show in your demo, you have to add verbose typedefs, structs, additional fn/defs, SIMD calls, vectorization hooks, loop-unrolling, autotune insertions, etc.
While great, this adds mental overhead and clutters an otherwise concise and elegant syntax. Do you think this syntactic molasses will become second nature to developers? Will IDE tools make writing it easier?
This is very exciting. I saw you mentioned Numba and Cython, but any comments on PyPy? How about interfacing Mojo's LLVM+MLIR with PyPy? Would that make it more compatible with the Python ecosystem, since 90% of the Python ecosystem works on PyPy?

I'm afraid I'm not making any sense, but this is what I've been waiting for in the 13 years since I first touched Python 2.5.
The two biggest things that have happened in programming for decades are:
1: StackOverflow
2: ChatGPT
Nothing else comes close. Certainly not any new language.
WHY are these two things the biggest developments in programming? Because they help developers solve problems and learn new things from a global accumulation of programmer knowledge.
If there was a Nobel Prize for computing then both StackOverflow and ChatGPT would be deserving of it, or the people behind them anyway.
My bad - I meant to write "Mojo may be the biggest programming language advance in decades" but forgot to write "language". I've updated the title now. (Although it doesn't update the HN title unfortunately.)
I am actually super interested in and positive about Mojo, but I can't take the title seriously. Not even close. As is, it is basically yet another new language (completely distinct from Python) that looks and feels kind of like Python and hopes to convert a bunch of users, but is not really Python, and it embeds CPython in the same way that Swift (for TensorFlow) already could (and a lot of random software also does, to use Python as its scripting engine).
GitHub?
The cloud?
GPUs? Multi-core CPUs?
Subscription models?
SPAs? Ajax?
VC funding?
Mobile phones?

I don't think you appreciate what has happened over the last several decades. I'm sure I'm missing a ton of things as well. Stack Overflow and ChatGPT probably wouldn't even make my top-five list for the last 3 decades. If you look 3 decades out, I'm sure there will be some AI stuff in there, but it is not even close to making the list right now, IMO.
> 1. Mojo hasn’t actually achieved these things, and the snazzy demo hides disappointing real-life performance, or
> Neither of these things are true.
> The demo, in fact, was created in just a few days
Not to detract from this (I'd trust anything Chris does), but anyone who follows similar projects (TruffleRuby comes to mind) knows that it's very easy to implement a tiny subset of a highly dynamic language and make it fast; it's insanely hard to scale it all the way up to the most dynamic behaviors.

The lines above read to me as a bad sign that the author of the article isn't being very honest. It definitely is a snazzy demo hiding real-life performance, because one way to hide performance is to implement a very small subset and make it fast.
Indeed, you're totally right about that. That's the trick with Mojo: our goal is not to make dynamic Python magically fast. Yes, we are quite a bit faster at dynamic code (because we have a compiler instead of an interpreter), but that isn't by relying on a "sufficiently smart" compiler to remove the dynamism; it is just "compiler" instead of "interpreter".

The reason Mojo is way, way faster than Python is that it gives programmers control over static behavior and makes it super easy to adopt incrementally where it makes sense. The key payoff is that the compilation process is quite simple, there are no JITs required, you get predictable and controllable performance, and you still get dynamism where you ask for it.

Mojo doesn't aim to magically make Python code faster with no work (though it does accidentally do that a little bit); Mojo gives you control so you can care about performance where you want to. 80/20 rule and all that.
Mojo also has a more advanced ownership system than Rust or Swift, which is also pretty cool if you're into such things. Check out the programmer's manual for more info: https://docs.modular.com/mojo/

-Chris Lattner @ Modular
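The 80/20 workflow described above, finding the hot 20% first and spending the typing effort only there, can be sketched with Python's standard-library profiler (the function names here are illustrative):

```python
import cProfile
import io
import pstats

def hot():
    # The kind of numeric inner loop worth rewriting with static types.
    return sum(i * i for i in range(100_000))

def cold():
    # Glue code that can stay dynamic.
    return {"status": "ok"}

def workload():
    for _ in range(20):
        hot()
    return cold()

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Sort by cumulative time; `hot` dominates the report, so that is the
# function you would annotate for performance, leaving `cold` untouched.
report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(5)
```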
Although that's true, TruffleRuby and GraalPy show that you can actually make highly dynamic languages extremely fast with a powerful enough JIT compiler, which Graal/Truffle actually are. The hard part those projects are wrestling with is being 100% compatible with the entire ecosystem, warty native modules that abuse interpreter internals and all the rest. But they have the tech and manpower to do it.
So one of the key questions for Mojo is going to be this: how does it or will it compare to GraalPy as it matures, which should be able to run untyped Python at high speed, potentially even multi-threaded, with existing native extensions.
It may be that the advantage is predictability, the lack of warmup, that people in the AI space just prefer stuff made by a startup, that Mojo solves other pain points like packaging; all sorts of things. But it may also be that Python took off in ML exactly because it's untyped, and people don't want to deal with the hassle of a type system if they don't need it.
The alternatives section of the blog post doesn't mention Graal/Truffle at all, suggesting that this is a bit of a blind spot for them. But these are well funded projects that have already yielded many-orders-of-magnitude speedups for existing code. It's not necessarily a good idea to write them off.
> The next step was to create a minimal Pythonic way to call MLIR directly. That wasn’t a big job at all, but it was all that was needed to then create all of Mojo on top of that – and work directly in Mojo for everything else. That meant that the Mojo devs were able to “dog-food” Mojo when writing Mojo, nearly from the very start. Any time they found something didn’t quite work great as they developed Mojo, they could add a needed feature to Mojo itself to make it easier for them to develop the next bit of Mojo!

> This is very similar to Julia, which was developed on a minimal LISP-like core that provides the Julia language elements, which are then bound to basic LLVM operations. Nearly everything in Julia is built on top of that, using Julia itself.
I've always wanted to create a new language (probably visual) where the implementation of that language is open to being extended and changed right in the same context where you use that language. A readily accessible trapdoor down to the next lower level of abstraction. I suspect this leads to more "expressivity" while being less code overall.
Sounds like Julia (as mentioned) has proven this is true, and perhaps Mojo will too.
> I've always wanted to create a new language (probably visual) where the implementation of that language is open to being extended and changed right in the same context where you use that language. A readily accessible trapdoor down to the next lower level of abstraction. I suspect this leads to more "expressivity" while being less code overall.
It sounds like you would find Self [1] interesting (along with its visual programming environment).

[1] https://en.wikipedia.org/wiki/Self_(programming_language)
I wish the article would also mention Nim, a Python-like language that compiles to native, and compare it to Mojo.
Also, this isn't quite correct:
> There is also the approach taken by Go, which isn’t able to generate small applications like C, but instead incorporates a “runtime” into each packaged application. This approach is a compromise between Python and C, still requiring tens of megabytes for a binary, but providing for easier deployment than Python.
Compiling Go with gccgo instead of the Go compiler, and with the right flags, results in a much smaller executable.
It is also possible to further compress the executable with upx.
This command should result in executables around 0.5 to 1.5 MiB in size, on Linux (depending on the size of the Go project):
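The command itself did not survive in this comment, so the following is an assumption about what was meant, using the standard size-oriented flags (sizes will vary by project and toolchain):

```shell
# Build with gccgo instead of the default gc toolchain,
# optimizing for size and stripping symbols.
go build -compiler gccgo -gccgoflags "-Os -s" -o app .

# Optionally compress the stripped binary further with upx.
upx --best app
```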
> I wish the article would also mention Nim, a Python-like language that compiles to native, and compare it to Mojo.
I would not say that Nim is “a Python-like language”. The only significant similarity is in the usage of indentation instead of curly brackets. To me, it is much more similar to Pascal/Ada/Oberon, but less verbose.
The title is obviously too hyperbolic, as the article itself doesn't even try to back up the claim. It's just a good introductory piece. (The official docs are a little too scattered to read through.)
Although I like the basic concept, it feels awkward that the engine is bound to the language, which is itself bound to a specific domain. Unless Mojo becomes fully compatible with CPython, the language will be just another DSL, which won't get adopted widely. Worse, MLOps is already a huge mess, so I can't imagine migrating to a language with limited uses.
It's interesting to read, but I'm not very convinced by the added value compared to using Cython, building modules in Fortran or C, or using interop with Rust or Nim.

In particular, I think the need to manage memory manually, without the option of a GC (or some other automatic memory management), is a mistake. While I understand that you may need to manage memory manually to squeeze out maximum performance in some cases, the modest performance degradation shown in benchmarks by D, Nim, Julia, or C# shows that you can have a very good compromise between speed and ease of use with automatic memory management.
I agree that Jeremy was excited about Swift being a possible option for solving the "two languages" problem, but to be fair there weren't many options for ML programmers at the time.
Also, I think "Swift for Tensorflow" was doomed from the start. Swift doesn't really exist outside of Apple's ecosystem, and it was competing with Python which is everywhere, and was a top 5 language even then.
Making a superset of Python is a winning strategy because people can try it out and walk away if they don't like it without much effort. It worked for C-with-classes (which became C++) and TypeScript, even though it took a little while for them to gain popularity and mass adoption.
I understand that there might be some benefit in developing Mojo in a small group. But I really wish there were some source code to compile, or at least binaries to execute. Currently it is locked behind a wait-list that might give access to executing code on a remote server... But if you make these big claims about hardware acceleration and performance, I really wish I could verify those claims on my own hardware.
Really cool project; looking forward to tinkering with some actual releases soon. I know the names behind this project are highly credible, but I'm reluctant to invest much brainpower into something that might be locked into cloud usage.
We're not in a rush here, we'd rather do things right than do things fast. Mojo is currently one day old :-)
We are deliberately rolling things out slowly so we can learn as we go. We're also eager for it to go far and wide, but we think it makes sense to be a bit deliberate about that. When we launched Swift, 1.0 was more like a 0.5, and that started things off on the wrong foot. Mojo is still early in development and not ready for widespread production use.

-Chris Lattner @ Modular
It appears to be just another language; I don't see what's different about it. Anyway, it's not the languages themselves that make them big, it's the ecosystems around them.
So is this more like Matlab or more like Python, in the legal and organizational ways? I don't know the right words for it so I'm not going to say "free" or "open source" but maybe you know what I mean.
I think this is a great analogy that will get more developers to make the connection. I've read all the "why" and all but this phrasing really sinks in for me.
Jeremy has a lot of cachet with me from his amazing work in education, and his excitement about this is a great sign in Mojo's favor. Solving the two-language problem would be pretty amazing.
FpUser|2 years ago
To me, reduced debuggability is one of the primary reasons to ignore any language, however "beautiful" it is.

Koshkin|2 years ago
Same with C++ vs. C.
fanf2|2 years ago
Why is the & (reference) argument convention a suffix? That seems inconsistent with Python’s *args and **kwargs, and the keyword conventions.
What is the difference between the “owned” and “consuming” conventions?
And why is the ^ (consume) operator postfix? Python doesn’t have any postfix operators, and it seems awkward with respect to the infix ^ xor operator.
pharmakom|2 years ago
And StackOverflow is number 1 :)
echelon|2 years ago
An incremental improvement with an enormous impact.
Alifatisk|2 years ago
4. React
5. Rust
sublinear|2 years ago
The article introduces it with a video instead of just getting to the point and showing code.
Yeah idk. Looks cool I guess.
ianpurton|2 years ago
Or does Mojo sit on top of IREE?

Will the inference engine support multi-GPU setups?