For years I've kept a list of apps, ideas, and products I might build someday. I never made the time; with Cursor AI I've already built one and am working on another. It's enabling me to use frameworks I barely know, like React Native and Swift.
The first prompt (with o1) will get you 60% there, but then you have a different workflow. The prompts can get stuck in a local minimum, where Claude/GPT-4/etc. just can't do any better. At that point you need to climb back out and try a different approach.
I recommend git branches to keep track of this. Keep a good working copy in main, and anytime you want to add a feature, make a branch. If you get it almost there, make another branch in case it goes sideways. The biggest issue with developing like this is that you are not a coder anymore; you are a puppet master of a very smart and sometimes totally confused brain.
> For years I've kept a list of apps, ideas, and products I might build someday. I never made the time; with Cursor AI I've already built one and am working on another.
This is one fact that people seem to severely under-appreciate about LLMs.
They're significantly worse at coding in many respects than even a moderately skilled and motivated intern, but for my hobby projects, until now I haven't had any intern that would even so much as take a stab at some of the repetitive or just not very interesting subtasks, let alone stick with them over and over again without getting tired of it.
If you have the budget, I have also taken a liking to perplexity.ai. I got it free from my school, and it basically aggregates searches for me with sources (but be sure to check them, since sometimes it reads between the links, so to speak). It does the Google searching for me and has returned more up-to-date API info than either Claude or ChatGPT knew about. Then I let Claude or ChatGPT know about it by copying docs and source code for it to work from.
That's literally going through a dark maze blindfolded, just bouncing off the walls randomly and hoping you are at least generally moving toward your goal.
If software engineering is going to look like this, oh boy am I happy to be retiring in a mere 17 years (fingers crossed) and not having to spend more time on such work. No way any quality complex code can come out of such an approach, and people complain about the quality of software now.
> The first prompt (with o1) will get you 60% there, but then you have a different workflow. The prompts can get stuck in a local minimum, where Claude/GPT-4/etc. just can't do any better. At that point you need to climb back out and try a different approach.
So you're basically bruteforcing development, a famously efficient technique for... anything.
What a feat! There are at least 3 pages of Google search results for nearly the same thing. The "prompt" I used on google.com is:
site:github.com map comparison
I guess the difference is that my way uses dramatically less time and fewer resources, but requires directly acknowledging the original coders instead of relying on the plagiarism-ish capabilities of regurgitating something through an LLM.
But creating things for which there are many existing, documented examples is what LLMs do best. Without this use case it's almost like they don't provide any value at all.
I have about 6 months of coding experience. All I really knew was how to build a basic MERN app.
I've been using Sonnet 3.5 to code, and I've managed to build multiple full-fledged apps, including paid ones.
Maybe they're not perfect, but they work and I've had no complaints yet. They might not scale to become the next Facebook, but not everything has to scale.
I learned to drive before in-car GPS was widely available, at least where I lived.
Going to some new place meant getting a map, looking at it, making a plan, following the plan, keeping track on the map, that sort of thing.
Then I traveled somewhere new, for the first time, with GPS and navigation software. It was quite impressive, and rather easier. I got to my destination the first time without any problems. And each time after that.
But I did remark that I did not learn the route. The 10th time, the 50th time, I still needed the GPS to guide me. And without it, I would have to start the whole thing from scratch: get a map, make a plan, and so on.
Having done the "manual" navigation with maps lots of times before, it never worries me what I would do without a GPS. But if you're "born" with the GPS, I wonder what you do when it fails.
Are you not worried how you would manage your apps if for some reason the AIs were unavailable?
We'll see what the future holds, but as an old timer, using LLMs to create applications seems exactly the same as:
Python/JS and their ecosystems replacing OS-hosted C/C++, which replaced bare-metal assembly, which replaced digital logic, which replaced analog circuits, which replaced mechanical design as the "standard go-to tool" for creating programs.
Starting with punchcard looms and Ada Lovelace maybe.
In every case we trade resource efficiency and lower level understanding for developer velocity and raise the upper bound on system complexity, capability, and somehow performance (despite the wasted efficiency).
Every time I see claims like this, I instinctively click on the user's profile and try to verify if their story checks out.
>I played around a lot with code when I was younger. I built my first site when I was 13 and had a good handle on Javascript back when jQuery was still a pipe dream.
>Started with the Codecademy Ruby track which was pretty easy. Working through RailsTutorial right now.
posted on April 15, 2015, https://news.ycombinator.com/item?id=9382537
>I've been freelancing since I was 17. I've dabbled in every kind of online trade imaginable, from domain names to crypto. I've built and sold multiple websites. I also built and sold a small agency.
>I can do some marketing, some coding, some design, some sales, but I'm not particularly good at any of those in isolation.
posted on Jan 20, 2023, https://news.ycombinator.com/item?id=34459482
So I don't really understand where this claim of only "6 months of coding experience" is coming from, when you clearly have been coding on and off for multiple decades.
What do you do if your app has a bug that your LLM isn't able to fix? Is your coding experience enough to fix it, or do you ship with bugs hoping customers won't mind?
Genuine question: Do you feel like you're learning the language/frameworks/techniques well? Or do you feel like you're just getting more adept at leveraging the LLM?
Do you think you could maintain and/or debug someone else's application?
I think the front end is the most interesting place right now, because it’s where people are making stuff for themselves with the help of LLMs.
The browser is a great place to build voice chat, 3d, almost any other experience. I expect a renewed interest in granting fuller capabilities to the web, especially background processing and network access.
I'm not saying you're wrong at all, or in disbelief -- but I've spent lots of time with Claude 3.5 trying to prototype React apps, not even full-fledged prototypes -- and I can't get it to make anything bug-free somehow.
Maybe I'm "holding it wrong" -- I mean using it incorrectly.
True, it renders quite interesting mockups and has React code behind them -- but then try to get this into even a demoable state for your boss or colleagues...
Even a simple "please create a Dockerfile with everything I need in a directory to get this up and running"... doesn't work.
The Dockerfile doesn't work (my fault maybe for not expressing I'm on arm64), the app is misconfigured, files are in the wrong directories, key things are missing.
Again just my experience.
I find Claude interesting for generating ideas-- but I have a hard time seeing how a dev with six months experience could get multiple "paid" apps out with it. I have 20 years (bla, bla) experience and still find it requires outrageous hand holding for anything serious.
Again I'm not doubting you at all -- I'm just saying me personally I find it hard to be THAT productive with it.
Claude is fantastic. I think the model itself is good enough to be able to write good software when competently directed; it's let down only by the UI/UX around it.
My only complaints are:
a) It's really easy to hit the usage limit, especially when refactoring across a half dozen files. One thing that'd theoretically be easy-ish to fix would be automatically updating files in the project context (perhaps with an "accept"/"reject" prompt) so that the model knows what the latest version of your code is without you having to re-upload it constantly.
b) It oscillates between being lazy in really annoying ways (giving large-ish code blocks with commented omissions partway through) and supplying the full file unnecessarily, using up your usage credits.
My hope is that JetBrains gives up on their own (pretty limited) LLM and partners with Anthropic to produce a super-tight native IDE integration.
I wanted to develop a simple tool to compare maps. I thought about using this opportunity to try out Claude AI for coding a project from scratch. It worked surprisingly well!
At least 95% of the code was generated by AI (I reached the limit so had to add final bits on my own).
I asked Claude AI to make me an app and it refused and called it dangerous. I asked what kind of apps it could build and it suggested social media or health. So I asked it to make one, but it refused: too dangerous. I asked it to make anything... any app, and it refused. I told it it sucked and it said it didn't. Then I deleted my account.
I can't think of a worse LLM than Claude.
I think we're going to see a similar backlash to AI apps as we did with AI art.
Not necessarily because users can identify AI apps, but because, due to the lower barrier to entry, the space is going to get hyper-competitive and it'll be VERY difficult to distinguish your app from the hundreds of nearly identical ones.
Another thing that worries me (because software devs in particular seem to take a very loose moral approach to plagiarism and basic human decency) is that it'll be significantly easier for a less scrupulous dev to find an app that they like, and use an LLM to instantly spin up a copy of it.
I'm trying not to be all gloom and doom about GenAI, because it can be really nifty to see it generate a bunch of boilerplate (YAML configs, devops-y stuff, etc.), but sometimes it's hard....
I hope not. I'm glad that software devs in particular seem to be adaptable to new technologies instead of trying to stop progress.
Take this very post for example. Imagine an artist forum having daily front-page articles on AI, and most of the comments are curious and non-negative. That's basically what HackerNews is doing, but with developers instead. The huge culture difference is curious, and makes me happy with the posters on this site.
You attribute it to the difficulty of using AI coding tools. But such tools to cut out the programmer and make development available to the layman have always existed: libraries, game engines, website builders, and now web app builders. You also attribute it to the flooding of the markets. But the website and mobile markets are famously saturated, and yet we continue making stuff there, because we want to (and because quality things make more money).
I instead attribute it to our culture of free sharing (what one might call "plagiarism"... of ideas?!), adaptability, and curiosity. And that makes me hopeful.
No doubt about it, things will get very competitive in the software space and while anyone will be able to use generative AI tools, I think more will be expected for less.
I am looking forward to this type of real time app creation being added into our OSs, browsers, phones and glasses.
> I am looking forward to this type of real time app creation being added into our OSs, browsers, phones and glasses.
What do you see that being used for?
Surely, polished apps written for others are going to be best built in professional tools that live independently of whatever the OS might offer.
So I assume you're talking about quick little scratch apps for personal use? Like an AI-enriched version of Apple's Automator or Shortcuts, or of shell scripts, where you spend a while coaching an AI to write the little one-off program you need instead of visually building a workflow or writing a simple script? Is that something you believe there's a high unmet need for?
This is an earnest question. I'm sincerely curious what you're envisioning and how it might supersede the rich variety of existing tools that seem to only see niche use today.
https://github.com/williamcotton/search-input-query
Why multi-pass? So multiple semantic errors can be reported at once to the user!
The most important factor here is that I've written lexers and parsers beforehand. I was very detailed in my instructions and put it together piece-by-piece. It took probably 100 or so different chats.
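To make the multi-pass point concrete, here is a minimal sketch (hypothetical field names, not the actual search-input-query code) of a semantic pass that accumulates every error instead of stopping at the first one:

```python
from dataclasses import dataclass

@dataclass
class Field:
    name: str

# Hypothetical schema, for illustration only
KNOWN_FIELDS = {"title", "author", "date"}

def check_semantics(fields):
    """Collect every semantic error rather than raising on the first,
    so the user sees all problems in a single run."""
    errors = []
    for f in fields:
        if f.name not in KNOWN_FIELDS:
            errors.append(f"unknown field: {f.name}")
    return errors

# Both typos are reported at once:
print(check_semantics([Field("title"), Field("autor"), Field("datee")]))
# ['unknown field: autor', 'unknown field: datee']
```

A fail-fast checker would only ever surface the first typo, forcing a fix-and-rerun loop for each one.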
Try it out with the GUI you see in the gif in the README:
git clone git@github.com:williamcotton/search-input-query.git
cd search-input-query/search-input-query-demo
npm install
npm run dev
Click the "Share" button in the upper right corner of your chat.
Click the "Share & Copy Link" button to create a shareable link and add the chat snapshot to your project’s activity feed.
It's even documented on their site: https://support.anthropic.com/en/articles/9519189-project-vi...
/edit: I just checked. I think they had a regression? Or at least I cannot see the button anymore. Go figure. Must have been pretty recent, as I shared a chat just ~2-3 weeks ago.
Started off with having it create funny random stories, to slowly creating more and more advanced programs.
It's shocking how good 3.5 Sonnet is at coding, considering the size of the model.
We don't know the size of Claude 3.5 Sonnet or any other Anthropic model.
I used a Claude AI Project to attach requirements for the project. Then I just went with a single conversation. I specified that I wanted to do it in small steps and then was just doing copy -> paste until I reached the limit. I think that was because I was doing one big convo instead of attaching code to the project.
So pretty simple flow, totally not scalable for bigger projects.
I need to read up on and check out Cursor AI, which can also use Claude models.
You can use the VS Code extension Cline to give it a task, and it uses an LLM to go out and create the app for you.
In Django I had it create a backend, set up an admin user, and create requirements.txt, and then do a whole frontend in Vue as a test. It can even do screen testing, and it tested what happens if it enters a wrong login.
Is Claude 'better' than o1-preview? I've had phenomenal results with o1-preview (switching to o1-mini for simpler asks to avoid running out of queries), and tried Claude once and wasn't super impressed. Wondering if I should give it another shot.
Has somebody evaluated the pros and cons of giving developers a programming-specific AI tool like copilot versus a general-purpose AI tool like chatgpt or claude? We are a small shop so I would prefer to not pay for both for every developer.
Ideally, Claude should have told you about easier approaches. I don't see any reason to mess around with code.
There are plenty of website builder tools that will glue third party maps. Even the raw Google Maps API website will generate an HTML page with customized maps.
Next obvious steps: make it understand large existing programs, learn from the style of the existing code while avoiding the bad style where it's present, and then contribute features or fixes to that codebase.
Claude has worked amazingly well for me as somebody really not into UI/web development.
There are so many small tasks that I could, but until now almost never would, automate (whether because it's not worth the time [1] or because I just couldn't bring myself to do it, as I don't really enjoy it). A one-off bitmask parser at work here, a proof-of-concept webapp at home there – it has literally opened up a new world of quality-of-life improvements, in a purely quantitative sense.
It extends beyond UI and web development too: Very often I find myself thinking that there must be a smarter way to use CLI tools like jq, zsh, etc., but considering how rarely I use them, and that I already know an ineffective way of getting what I need, until now I couldn't justify spending hours going through documentation on the moderately high chance of finding a few useful nuggets that let me shave off a minute here and there every month.
The same applies to SQL: After plateauing for several years (I get by just fine for my relatively narrow debugging and occasional data migration needs), LLMs have been much better at exposing me to new and useful patterns than dry and extensive documentation. (There are technical documents I really do enjoy reading, but SQL dialect specifications, often without any practical motivation as to when to use a given construct, are really not it.)
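As a hypothetical illustration of the kind of pattern an LLM can surface (invented schema, shown here via Python's built-in sqlite3): a window function that picks the latest row per group, replacing a clunkier GROUP BY plus self-join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE events (username TEXT, ts INTEGER, action TEXT);
    INSERT INTO events VALUES
        ('alice', 1, 'login'), ('alice', 2, 'logout'),
        ('bob',   1, 'login');
""")
# Latest action per user: rank rows within each user by timestamp,
# newest first, then keep only the top-ranked row.
rows = con.execute("""
    SELECT username, action FROM (
        SELECT username, action,
               ROW_NUMBER() OVER (PARTITION BY username ORDER BY ts DESC) AS rn
        FROM events
    ) WHERE rn = 1
    ORDER BY username
""").fetchall()
print(rows)  # [('alice', 'logout'), ('bob', 'login')]
```

(Window functions need SQLite 3.25+, which any recent Python bundles.)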
LLMs have generally been great at that, but being able to immediately run what they suggest in-browser is where Claude currently has the edge for me. (ChatGPT Plus can apparently evaluate Python, but that's server-side only and accordingly doesn't really allow interactive use cases.)
[1] https://xkcd.com/1205/
Can anyone weigh in on how Claude compares to Copilot? Copilot feels like fancy autocomplete, but people seem to have good experiences with Claude, even in more complex settings.
This sort of thing will be interesting to me once it can be done with fully local and open source tech on attainable hardware (and no, a $5,000 MacBook Pro is not attainable). Building a dependence on yet another untrustworthy AI startup that will inevitably enshittify isn’t compelling despite what the tech can do.
We’re getting there with some of the smaller open source models, but we’re not quite there yet. I’m looking forward to where we’ll be in a year!
In many professions, $5000 for tools is almost nothing.
I like open source and reproducible methods too, but here the code was written by Claude and then exported. Is that considered a dependency? They can find a different LLM, or pay someone to improve/revise/extend the code later if necessary.
The nice thing is it doesn't really matter all too much which you use "today"; you can take the same inputs to any of them, and the outputs remain complete forever. If the concern is that you'll start using these tools, like them, start using them a lot, and then all hosted options disappear tomorrow (meaning being able to run locally is important to you), then Qwen2.5-Coder 32B with a 4-bit quant will run at 30+ tokens/second and give you many years of use for <$1k in hardware.
If you want to pay that <$1k up front just to say "it was always just on my machine, nobody else's", then more power to you. Most just prefer this "pay as you go for someone else to have set it up" model. That doesn't mean it's unattainable if you want to run it differently, though.
> (and no, a $5,000 MacBook Pro is not attainable)
I know we all love dunking on how expensive Apple computers are, but for $5,000 you could get a Mac Mini maxed out with an M4 Pro chip (14-core CPU, 20-core GPU, 16-core Neural Engine), 64GB unified memory, an 8TB SSD, and 10 Gigabit Ethernet.
M4 MacBook Pros start at $1599.