thrwayaistartup's comments

thrwayaistartup | 2 years ago | on: Why strive? Stephen Fry reads Nick Cave's letter on the threat of AI [video]

> AI lacks insight and lack of insight is what can turn a succulent feast of a life into biweekly deliveries of Soylent.

Does a picture of a hummingbird lack insight? Does collage art lack insight? Do remixes lack insight? Does mass-produced formulaic pop music lack insight?

Maybe. Or maybe some artists enjoy those creative processes and some audiences enjoy the output. Maybe oil painters who critique photography, and photographers who critique collages, and musicians who critique mash-ups, and DJs who critique modern production studios, and, yes, artists who critique the use of AI models in creative processes, are all just being pretentious assholes.

(It is possible I am simply misunderstanding Cave. I take most of his writing to be artistic prose. It's possible that these are sincere metaphysics and that Nick Cave does literally believe in some sort of ur-religious "essential and unconscious human spirit underpinning our existence". In which case I think he's got a nutty religion, and consider the fact that AI is an existential threat to that religion mostly a net good for humanity.)

thrwayaistartup | 2 years ago | on: Why strive? Stephen Fry reads Nick Cave's letter on the threat of AI [video]

No. His critique is one of process, not of output quality.

He is asserting the existence of an "unconscious human spirit", asserting that ChatGPT "is fast-tracking the commodification of the human spirit by mechanising the imagination", and asserting that we should fight against AI as we would fight against genocide: "just as we would fight any existential evil, we should fight it tooth and nail, for we are fighting for the very soul of the world."

Incidentally, I agree with the importance of struggle and soul-work. I agree with the general valence of the lament, but come away with a very different call to action. In particular: I just don't think I have the right to impose luddism on the rest of the world for my particular niche while benefiting lavishly from the rest of the world's alienation from their much more essential labor.

To me, it's the ravings of an angry, arrogant, and entitled elitist who fears being dethroned from his comfortable luxury and refined status. And a hilariously hyperbolic one at that. Of all the actual evils I would write something like this about, AI Art probably isn't even in the top 1,000,000. Can you imagine looking at the world today and writing that last paragraph? My god.

---

Quotes:

> ChatGPT rejects any notions of creative struggle, that our endeavours animate and nurture our lives giving them depth and meaning. It rejects that there is a collective, essential and unconscious human spirit underpinning our existence, connecting us all through our mutual striving.

> ChatGPT is fast-tracking the commodification of the human spirit by mechanising the imagination. It renders our participation in the act of creation as valueless and unnecessary. That ‘songwriter’ you were talking to, Leon, who is using ChatGPT to write ‘his’ lyrics because it is ‘faster and easier,’ is participating in this erosion of the world’s soul and the spirit of humanity itself and, to put it politely, should fucking desist if he wants to continue calling himself a songwriter.

> This impulse – the creative dance – that is now being so cynically undermined, must be defended at all costs, and just as we would fight any existential evil, we should fight it tooth and nail, for we are fighting for the very soul of the world.

thrwayaistartup | 2 years ago | on: Why strive? Stephen Fry reads Nick Cave's letter on the threat of AI [video]

> A question is, is it possible to advance technology to fulfill the green revolution without changing the value of human creativity due to the creation/advancement of genAI?

I have to admit I'm not quite sure what you mean, and I do admit full guilt in starting us down the path of "mixed analogies" :). I'll try my best, though.

> Or past a certain point, the results of discovering improved health and ecological outcomes will become inextricably linked with discovering new technologies that cause conflict? What actually drives such a process?

I do think with respect to life-sustaining things -- medicine, pharma, food, shelter, water, energy -- that a combination of specialization and automation is necessary to increase the collective standard of living, and that labor alienation stems from a combination of specialization and automation.

Where I struggle is coming up with an affirmative argument that an artist should benefit from automation of medicine or farming, but that an alienated lab tech or food factory worker should not benefit from automated art.

Another way to look at this is: the less you pay for art-as-entertainment, the more resources you have to buy free time to produce your soul-work (whatever that may mean to you).

thrwayaistartup | 2 years ago | on: Why strive? Stephen Fry reads Nick Cave's letter on the threat of AI [video]

> we had a much better connection with what it meant to be human when we were tilling dirt and making clay pots and weaving cloth for each other

Actually, I agree. I think Nick Cave is right about this. I do think this sort of alienation has a cost.

But that doesn't mean that there is any remotely moral case for undoing the green revolution and allowing billions to starve. And it does not mean that the machines which feed those billions of people who might otherwise starve are somehow the root cause of a decline of humanity. In fact, quite the opposite.

And this is the paradox: our alienation from agricultural work is precisely what enables our very existence.

My main observation is that there is a way out of this paradox. As it turns out, you can go out and grow some food in a garden, or write a song, or paint a picture, even if that work is commodified and there is no paycheck. The commodification and automation of those industries do not prevent one from engaging in them as soul-work.

The teacher who plays in a band in his garage is no different -- from a "soul of humanity" perspective -- than Nick Cave. But Nick Cave's implicit argument demands that he is different, and not from an economic perspective, but from a very "soul of humanity" perspective. It's extraordinarily off-putting to me in that sense.

Of course, engaging in art as a hobby instead of for pay does require free time and a share of returns on our societal bargain. On that note: elites like Nick Cave should be spearheading serious conversations about political economics and labor economics, instead of lamenting the loss of their extraordinarily rare status.

thrwayaistartup | 2 years ago | on: Why strive? Stephen Fry reads Nick Cave's letter on the threat of AI [video]

Nick Cave is expressing a personal loss, and I believe that he truly feels that loss. But to me, this letter reads roughly like: "if I were the server or the bouncer instead of the performer or the writer, all of humanity would cease to have meaning". Which is perhaps true, for Nick Cave. But it also betrays something grotesque and profoundly wrong about his view on the relationship between paid labor and the human soul.

It's a wonderful thing to find meaning in one's work, and for the things in which one finds meaning to be well-compensated. But it is no birthright. Contrary to Nick Cave's view, I can absolutely assure you that non-artists in HR departments and nursing stations and factory floors and classrooms often live full, happy inner lives. Those lives are of their own making and do not derive from the artiste class's output.

Manual production of high-quality clothes, tables, and glassware used to be the norm. Generations of people found meaning in these crafts before the industrial revolution changed the economics. People still do these things, though only rarely as their primary way of making a living. Most art does not sustain a developed-world middle-class existence. Most art is hobby. And that's okay.

The creation of software and AI systems is itself a form of craft-work and soul-work, which many engineers and scientists relate to the same way that Nick Cave relates to music. It is unclear to me why Nick Cave's striving is more important than the striving of engineers and scientists, or why his sense of what humanity is should matter more than theirs.

thrwayaistartup | 2 years ago | on: My AI costs went from $100 to less than $1/day: Fine-tuning Mixtral with GPT4

...I think you missed the point. OAI/MS can sue the author or at least cut off API access. If that happens, the fact that OAI is under fire from NYT doesn't somehow obviate the author's need to cover some massive legal bills for the foreseeable future.

The NYT case could take years. In the meantime OAI could choose to go after ToS violators.

The legal system can accommodate more than one unresolved court case at a time. We don't, like, put a semaphore on related cases or anything like that. (Or, sometimes we do, but guess who you need to hire for many, many billable hours to make that happen in your case?)

So, the legal system can accommodate the NYT case against OAI and an OAI case against the author. The operative question is: can the author's pocketbook also accommodate?

(Or, more to the point, can the author accommodate losing access to gpt4? What happens when he wants to launch a new feature or pivot to a new product?)

thrwayaistartup | 2 years ago | on: My AI costs went from $100 to less than $1/day: Fine-tuning Mixtral with GPT4

The academic work is pretty safe as long as it isn't productized. The open models have a prima facie case to stand on. Using output is okay if you aren't directly competing with OpenAI, even according to their ToS.

> (e) use Output (as defined below) to develop any artificial intelligence models that compete with our products and services. However, you can use Output to (i) develop artificial intelligence models primarily intended to categorize, classify, or organize data (e.g., embeddings or classifiers), as long as such models are not distributed or made commercially available to third parties and (ii) fine tune models provided as part of our Services

thrwayaistartup | 2 years ago | on: My AI costs went from $100 to less than $1/day: Fine-tuning Mixtral with GPT4

This is a flagrant violation of OpenAI's terms of use for businesses [1].

I have two issues with those terms:

1. I think that eventually US courts will determine one of two things: that OpenAI et al. are guilty of massive infringement, or that these sorts of restrictive terms aren't enforceable. The problem these companies are trying to solve by putting terms on output seems unlikely to be solvable that way in the end. But we'll see.

2. Even if the terms are enforceable, the human review step in the tweet seems like it makes OpenAI's threading-the-needle position here even more fucking difficult for any jury or judge to take seriously.

However, enforcing the terms seems real damn hard in the case of small businesses... as long as you're not stupid enough to admit to violating them in a twitter thread, of course.

I think the author is probably safe from legal action for now because I don't think OpenAI is particularly eager to test the enforceability of their terms. And even if they are, doing so in this case is super high risk and super low reward. Still, I wouldn't test it by openly admitting to ToS violation like this. At the very least it seems like a good way to get cut off from OAI APIs.

[1] https://openai.com/policies/business-terms

thrwayaistartup | 2 years ago | on: Do call yourself a programmer, and other career advice (2013)

I read this in 2013 and remember enjoying the back-and-forth. Some reflections, a decade of life experience and a tech cycle later:

1. I'm with McKenzie on the coworkers aspect of the dialog. More separation from coworkers is better. In the Good Times (2013-2019, 2021) it seems "right" to trade some comp for familiarity and good work vibes, and almost... inhuman... not to. But in the Bad Times you're reminded that an Excel formula could cost you not just your job but also a big chunk of your personal social network. Diversification is good.

2. I now realize what both sides of this are getting at is basically: "how to progress from Junior/Mid Engineer to something after that". There are many paths. The conclusion of the article is good, in that respect. Also: you can just stay a Mid/Senior Engineer. That's okay.

3. Call yourself whatever you want/need to stay employable. Be a good colleague/person. Work is work.

thrwayaistartup | 2 years ago | on: The Great AI Weirding

Only a relatively tiny sliver of PhDs doing top-tier ML research are in groups at corps that care about publishing in academic conferences.

thrwayaistartup | 2 years ago | on: The Great AI Weirding

> Honestly, is there a big difference anymore?

Only a very small subset of industry cares about academic publishing, and even within that subset it's only a fraction of groups at a fraction of corps that consider publishing a primary or even secondary objective.

The groups that do care about those things can be good gigs, but are generally not the place in the company you want to be anyways, unless you can get in and out (for good) in <10 years. If you can do something that actually impacts the business -- that is actually useful to other humans -- no one gives a shit about h-indices or Kaggle scores. And you'll be better compensated anyways.

thrwayaistartup | 2 years ago | on: The Great AI Weirding

Or just leave academia. In the US at least, the job is like 80% government contracting and 20% teaching.

Teaching is great, so there's that. But literally every company will let you adjunct, and Professor of Practice usually pays more than 20% of a faculty salary. You can supervise PhD students as interns or by taking a courtesy affiliation (and often even have more impact on those students than their overworked and under-engaged advisors). And university classroom teaching in the US now looks a lot more like '90s/mid-aughts high school teaching.

Government contracting sucks, and the academic variety is not any better. I'd literally rather watch paint dry at a military base than contract for DARPA. NSF isn't actually that much better.

Who the fuck wants to be a combination high school teacher and federal government contractor? Saints or sociopaths, and there are a LOT more of the latter than the former in higher ed.

thrwayaistartup | 2 years ago | on: The Great AI Weirding

The open secret is that top-quartile R1 CS faculty positions aren't coveted anymore and don't attract the best like they used to.

The choice is now between increasingly tenuous/meaningless tenure after 5-10 years and a $500K/year lower bound for 10-12 years. That choice is... not a hard choice for anyone who values intellectual freedom. And the right answer sure as shit isn't the faculty position.

A good 50% of those faculty chasing NeurIPS papers are doing so because at least once before going up for tenure they will apply for positions at big tech. They end up coming in not just non-executive, but often outside of management and at the bottom of the (Top IC)-[1-2] total comp band. If they net an offer they'll usually leave. The major barrier to an offer is usually ego and "is this person actually humble enough to be useful to other people".

thrwayaistartup | 2 years ago | on: The Tesla Semi from an Insider's View After One Year: "Hot Mess"

The obvious: the manager is being sloppy. Whether they are correct or not is irrelevant; thinking and writing clearly about these issues is literally this person's entire day job. A sloppy report is the tip of an iceberg of sloppier thinking.

The conspiracy: the manager has an axe to grind that aligns with incumbent auto-makers' business strengths, suggesting that the analysis is debased in one way or another.

The "mind explodes": the manager is talking their book; behind the manager is a team using mountains of data to design messages that optimally push the market in the direction they want. The purpose of the message is not to be logically coherent to nerds on the internet. The point is to convince a few portfolio managers to behave in one way or another, and perhaps also to signal sentiment analysis algos.
