AI/ML is so abused these days that we stopped using the terms entirely while pitching our startup, even though we have already written two home-grown algorithms. It has become an addendum: "We also wrote our own machine learning algorithm, and we train it against 1 million acres of high-resolution satellite data."
https://www.ycombinator.com/rfs

Shameless plug, but if you're looking for ideas that are drastically less ambitious, along with a startling lack of AI/ML, I have a few[0][1].

[0]: https://unvalidated-ideas.vadosware.io/editions/009
[1]: https://unvalidated-ideas.vadosware.io
Half of these projects are so general that they are laughable; a quarter are so ambitious that a small prize would play no role in the potential development of a solution.
Less:

- Solve world peace.
- Create "dynamic" organizations.
- Establish a colony on Titan.

More:

- Increase the yield of staple crop X in country Y by Z within the next 5 years.
- Connect 1 million people to the Internet for less than 1 USD/month/person.
The whole idea is for you to come up with a project; that's why they are general. Not sure why people need detailed instructions to innovate and get funded.
Your "more" section lacks a purpose. You mention price of "connection" without additional details. One could argue that the main benefit of the internet, both ways communication of information worldwide, can be fulfilled with very low bandwidth and today can already be priced at 1 dollar per month per person.
What happens when we allocate resources to enable 1 dollar per month per person internet with megabits of bandwidth and this group of people use it to consume social media and streaming (aka "old days television") old day long?
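To make the bandwidth-versus-cost point concrete, here's a rough back-of-envelope sketch in Python. Every constant in it (the per-gigabyte delivery cost, the usage profiles) is an illustrative assumption made up for the sake of the arithmetic, not a sourced figure:

    # Back-of-envelope: what does 1 USD/month/person buy, under assumed costs?
    # All constants below are illustrative assumptions, not sourced figures.
    BUDGET_USD_PER_MONTH = 1.00
    COST_USD_PER_GB = 0.05  # assumed blended cost to deliver one gigabyte

    def affordable_gb_per_month(budget_usd: float, cost_per_gb: float) -> float:
        """Gigabytes per month the budget covers at the assumed unit cost."""
        return budget_usd / cost_per_gb

    def monthly_gb(kbps: float, hours_per_day: float) -> float:
        """Data moved in a 30-day month at a sustained rate in kilobits/s."""
        seconds = hours_per_day * 3600 * 30
        return kbps * 1000 * seconds / 8 / 1e9  # bits -> gigabytes

    cap = affordable_gb_per_month(BUDGET_USD_PER_MONTH, COST_USD_PER_GB)
    text_use = monthly_gb(kbps=10, hours_per_day=2)     # messaging/email-style
    video_use = monthly_gb(kbps=2000, hours_per_day=4)  # streaming-style

    print(f"Budget covers ~{cap:.0f} GB/month at ${COST_USD_PER_GB:.2f}/GB")
    print(f"Text-style use:  ~{text_use:.2f} GB/month ({'fits' if text_use <= cap else 'over budget'})")
    print(f"Video-style use: ~{video_use:.0f} GB/month ({'fits' if video_use <= cap else 'over budget'})")

At these assumed numbers, low-bandwidth two-way communication (~0.27 GB/month) fits inside the 1 USD budget with a wide margin, while streaming-style use (~108 GB/month) exceeds it several times over; the conclusion only flips if delivery costs are far higher than assumed.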
Worth highlighting the last item, since I'm guessing most won't make it that far:
>> Critiquing our approach
>> Research That Can Help Us Improve
>> We’d love to fund research that changes our worldview—for example, by highlighting a billion-dollar cause area we are missing—or significantly narrows down our range of uncertainty. We’d also be excited to fund research that tries to identify mistakes in our reasoning or approach, or in the reasoning or approach of effective altruism or longtermism more generally.
In the same way that dropping foreign aid on a country to 'solve' hunger can make the problem worse, EA money could distort market forces if it became big enough. I suspect it's very difficult to find investments that return more net good than standard businesses.
#5 Biological Weapons Shelters: I wonder how many of these already exist but are simply classified, or whether an alternative is just to invest in SpaceX or some other team with a vision of making humanity a multi-planetary species (or of building long-term, self-sustaining space habitats).

Solving big, difficult problems will require a lot.
Looking at their grants page¹, this organisation has committed only a single grant under $50k and various grants over $1m; where does 'tiny amounts of money' come from?

1. https://ftxfuturefund.org/our-grants/#grants

> Do you have a limit on how much funding you’ll provide to an application?
> No.
For some reason I get an almost misanthropic vibe from a lot of these. Maybe I’m missing some context where they support eliminating global poverty and such in parallel, but it feels too much like the subtext is that they don’t feel that all the suffering and injustice in the world matter long term.
If humanity plays its cards right, there will be trillions and trillions of humans living in the future, all over the galaxy. Everyone who lives right now, or has ever lived, is barely noticeable compared to that.
Take a look at the recent Less Wrong post: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

I don't see any flaws in this argument: we are all on a fast track to getting killed by AI, and the smartest people on the planet don't have a plan for how to fix it. "Build an AI that burns all GPUs to prevent all future AIs, and hopefully doesn't kill us all in the process" is the best idea they've got.
If you believe that we're about to kill trillions and trillions of potential future human beings, these priorities make sense.