emrehan | 1 year ago
* This is a response to Jan's claims: https://x.com/janleike/status/1791498174659715494
* The response says nothing concrete about AI alignment.
* We don't know when AI alignment could be achieved.
* AI ethics and AI bias are necessary concerns, but they are different from AI alignment.
* We don't know when existential-risk-posing AI could be developed.
* There is some risk of human extinction if x-risk-posing AI is developed before AI alignment is achieved.
* OpenAI is pushing the frontier of AI development, which risks human extinction, without allocating sufficient resources to AI alignment.
I am uneasy about subscribing to ChatGPT…