icebergwarrior's comments

icebergwarrior | 4 years ago | on: The only skin care that works? science video response (2020)

I wish they had made some recommendations and an application schedule. Without that, it's very hard to figure out which products are legit and which are not.

The average reader (me) just wants to buy 3 products, use them nightly, and move on, without doing hours of research.

icebergwarrior | 5 years ago | on: The “$20M to $500M” Question: Adding Top Down Sales

I would not recommend this. It will not help a founder with $2-5M in revenue and a team of 20-40 sell, and it will be a waste of time.

If you want mentorship and you are a founder, I would directly message early VPs of Sales and ask for help. Also, read the great SaaStr blog by Jason Lemkin.

icebergwarrior | 5 years ago | on: Amazon Liable for Defective Third-Party Products Rules CA Appellate Court

I've ordered clothes, furniture (desks, chairs), books, supplements, rugs, and probably a few other things. Everything shows up, seems to be exactly as claimed, is functional, and has worked.

So my initial question still stands: which categories are people complaining are rife with fakes and knock-offs? It seems to be quite niche electronics (huge hard drives, computer components, etc.).

And if the knock-offs are so good (in your game example) that the end user can't tell the difference (and no one else can either), what does it matter?

icebergwarrior | 5 years ago | on: Bogleheads

Please let us know how to consistently identify the 5-6 stocks that will outperform the market. You would be a billionaire if you could do this.

icebergwarrior | 5 years ago | on: OpenAI should now change their name to ClosedAI

I don't think you can make such definitive statements.

GPT-3 is certainly intelligent by the way a lot of us would describe intelligence. It can produce content in a way that is indistinguishable from human writing.

We don't know what else it can do. We don't know the pace of improvements happening here. There are a lot of open questions.

icebergwarrior | 5 years ago | on: Tempering Expectations for GPT-3 and OpenAI’s API

There needs to be a level of serious discourse, which doesn't appear to currently exist, around what to do: international treaties, repercussions, and so on.

I have no idea why people aren't treating this with grave importance. The development of AI technologies is clearly much further along than anyone thought it would be.

With exponential growth rates, acting too early is always seen as an "overreaction," but waiting too long is sure to be a bad outcome (see the world's response to the coronavirus).

There seems to be some hope: as a world, we seem to have banned human cloning, and that technology has been around since Dolly in the late 90s.

On the other hand, the USA can't seem to come to a consensus that a deadly virus is a problem, as it is killing its own citizens.

icebergwarrior | 5 years ago | on: Tempering Expectations for GPT-3 and OpenAI’s API

On a core level, why are you trying to create an AGI?

Anyone who has thought seriously about the emergence of AGI puts the chance that AGI causes a human-extinction-level event at ~20%, if not greater.

Various discussion groups I am a part of now see anyone developing AGI as equivalent to someone building a stockpile of nuclear warheads in their basement that they're not sure won't immediately go off on completion.

As an open question: suppose one believes that

1. We do not know how to control an AGI.
2. AGI has a very credible chance to cause a human-extinction-level event.
3. We do not know what this chance or percentage is.
4. We can identify who is actively working to create an AGI.

Why should we not immediately arrest people who are working toward an "AGI future" and try them for crimes against humanity? Certainly, in my nuclear warhead example, I would be arrested by the government of the country I am living in the moment they discovered the stockpile.
