icebergwarrior | 4 years ago | on: The only skin care that works? science video response (2020)
icebergwarrior's comments
icebergwarrior | 4 years ago | on: Facebook extends its work-at-home policy to most employees
icebergwarrior | 5 years ago | on: The “$20M to $500M” Question: Adding Top Down Sales
1. Performance (Paying for traffic / leads)
2. SEO (People click through to your site after googling you)
3. Viral (Your users share you)
Other than that, there are no truly scalable D2C growth channels. First Round has an entire article on this: https://firstround.com/review/drive-growth-by-picking-the-ri... Good luck!
icebergwarrior | 5 years ago | on: The “$20M to $500M” Question: Adding Top Down Sales
If you want mentorship and you are a founder, I would directly message early VPs of Sales and ask for help. Also, read the great SaaStr blog by Jason Lemkin.
icebergwarrior | 5 years ago | on: The longest train ride in the world (2019)
icebergwarrior | 5 years ago | on: New academic journal only publishes 'unsurprising' research rejected by others
icebergwarrior | 5 years ago | on: Amazon Liable for Defective Third-Party Products Rules CA Appellate Court
So my initial question still stands: which product categories are people complaining are rife with fakes and knock-offs? It seems to be quite niche electronics (huge hard drives, computer components, etc.)
And if the knock-offs are so good (in your game example) that the end user can't tell the difference (and no one else can either), what does it matter?
icebergwarrior | 5 years ago | on: Professional sporting events increase seasonal influenza mortality in US cities
icebergwarrior | 5 years ago | on: Amazon Liable for Defective Third-Party Products Rules CA Appellate Court
icebergwarrior | 5 years ago | on: Bogleheads
icebergwarrior | 5 years ago | on: Remote work is not necessarily a good thing for the worker
icebergwarrior | 5 years ago | on: OpenAI should now change their name to ClosedAI
GPT-3 is certainly intelligent in the way a lot of us would describe intelligence. It can produce content that is indistinguishable from what humans produce.
We don't know what else it can do. We don't know the pace of improvements happening here. There are a lot of open questions.
icebergwarrior | 5 years ago | on: Tempering Expectations for GPT-3 and OpenAI’s API
I have no idea why people aren't treating this as gravely important. The development of AI technologies is clearly far ahead of where anyone thought it would be.
With exponential growth rates, acting early is always seen as an 'overreaction', but waiting too long is sure to be a bad outcome (see: the world's response to the coronavirus).
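The exponential dynamic above can be sketched numerically (a hedged illustration, not from the comment; the 1% starting point and doubling model are assumptions for the example): a quantity that doubles each step sits at just 1% of a critical threshold only seven doublings before it blows past it, which is why early intervention looks like overreaction.

```python
def doublings_to_reach(start: float, threshold: float) -> int:
    """Count how many doublings it takes for `start` to reach `threshold`."""
    steps = 0
    value = start
    while value < threshold:
        value *= 2
        steps += 1
    return steps

# A problem at 1% of capacity is only ~7 doublings from 100%.
print(doublings_to_reach(1, 100))  # → 7
```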
There seems to be some hope, in that as a world we seem to have banned human cloning, and that technology has been around since Dolly in the late '90s.
On the other hand, the USA can't seem to come to a consensus that a deadly virus killing its own citizens is a problem.
icebergwarrior | 5 years ago | on: Tempering Expectations for GPT-3 and OpenAI’s API
Anyone who has thought seriously about the emergence of AGI puts the chance that AGI causes a human-extinction-level event at ~20%, if not greater.
Various discussion groups I am a part of now see anyone developing AGI as equivalent to someone building a stockpile of nuclear warheads in their basement that they're not sure won't immediately go off on completion.
As an open question: suppose one believes that
1. We do not know how to control an AGI
2. AGI has a very credible chance of causing a human-extinction-level event
3. We do not know what that chance or percentage is
4. We can identify who is actively working to create an AGI
Why should we not immediately arrest the people working toward an "AGI future" and try them for crimes against humanity? Certainly, in my nuclear warhead example, I would be arrested by the government of the country I am living in the moment it discovered the stockpile.
The average reader (me) just wants to buy 3 products, use them nightly, and move on, without doing hours of research.