
Project Ideas

75 points | apsec112 | 3 years ago | ftxfuturefund.org

29 comments

grangerg | 3 years ago
Remember back in the day when we used to use the word "algorithm", instead of "AI"?
marban | 3 years ago
Most AI in reality:

  switch (i) {
  case x:
    break;
  case y:
    break;
  default:
    break;
  }
Brajeshwar | 3 years ago
AI/ML are so abused these days that we stopped using the terms entirely while pitching our startup, even though we have already written two home-grown algorithms. It has become an addendum: "We also wrote our own machine-learning algorithm, and we train it against 1 million acres of high-resolution satellite data."
UmbertoNoEco | 3 years ago
Half of these projects are so general that they are laughable, and a quarter are so ambitious that a small prize would play no role in the potential development of the solution.

Less:

- Solve world peace.

- Create "dynamic" organizations.

- Establish a colony on Titan.

More:

- Increase the yield of staple crop X in country Y by Z within the next 5 years.

- Connect 1 million people to the Internet for less than 1 USD/month/person.

arisAlexis | 3 years ago
The whole idea is for you to come up with a project; that's why they are general. Not sure why people need detailed instructions to innovate and get funded.
bismuthcrystal | 3 years ago
Your "more" section lacks a purpose. You mention the price of "connection" without further details. One could argue that the main benefit of the internet, two-way worldwide communication of information, can be fulfilled with very low bandwidth and can already be priced at 1 dollar per month per person today. What happens when we allocate resources to enable 1-dollar-per-month-per-person internet with megabits of bandwidth, and this group of people uses it to consume social media and streaming (a.k.a. the television of old) all day long?
O__________O | 3 years ago
Worth highlighting the last item, since I'm guessing most won't make it that far:

>> Critiquing our approach

>> Research That Can Help Us Improve

>> We’d love to fund research that changes our worldview—for example, by highlighting a billion-dollar cause area we are missing—or significantly narrows down our range of uncertainty. We’d also be excited to fund research that tries to identify mistakes in our reasoning or approach, or in the reasoning or approach of effective altruism or longtermism more generally.

jlizzle30 | 3 years ago
In the same way dropping foreign aid on a country to ‘solve’ hunger can make the problem worse, EA money could distort market forces if it became big enough. I suspect it’s very difficult to find investments that return more net good than standard businesses.
atlasunshrugged | 3 years ago
#5 Biological Weapons Shelters: I wonder how many of these already exist that are just classified. Or whether an alternative is simply to invest in SpaceX or some other team with a vision of making humanity a multi-planetary species (or building long-term self-sustaining space habitats).
throwaway1777 | 3 years ago
If our bioweapon strategy is anything like our COVID strategy it’s nonexistent.
paulpauper | 3 years ago
I don't think it's enough money. If they fund a lot of projects, each one may only get a tiny amount.

Solving big, difficult problems will require a lot.

seoaeu | 3 years ago
For some reason I get an almost misanthropic vibe from a lot of these. Maybe I’m missing some context where they support eliminating global poverty and such in parallel, but it feels too much like the subtext is that they don’t feel that all the suffering and injustice in the world matter long term.
lumenwrites | 3 years ago
Take a look at the recent Kurzgesagt video: https://www.youtube.com/watch?v=LEENEFaVUzU

If humanity plays its cards right, there will be trillions and trillions of humans living in the future, all over the galaxy. Everyone who lives right now, or has ever lived, is barely noticeable compared to that.

Take a look at the recent Less Wrong post: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

I don't see any flaws in this argument: we are all on a fast track to getting killed by AI, and the smartest people on the planet don't have a plan for how to fix it. "Build an AI that burns all GPUs to prevent all the future AIs, and hopefully doesn't kill us all in the process" is the best idea they've got.

If you believe that we're about to kill trillions and trillions of potential future human beings, these priorities make sense.