GrantS's comments

GrantS | 5 months ago | on: Samsung taking market share from Apple in U.S. as foldable phones gain momentum

Wow, as others have said below, disabling the “Slide to Type” feature in Settings > General > Keyboard makes typing work well again on iOS. I cannot believe I put up with this awful typing experience for the past year or more. This should be broadcast more widely somehow. I’m sure many people have just assumed they got worse at typing. I am genuinely flabbergasted.

GrantS | 1 year ago | on: Meta Movie Gen

You can absolutely specify your own lyrics and structure to Suno.

GrantS | 1 year ago | on: Minuteman missile communications

A few small things as I think of them:

-To practice for the moment of a real launch command, he would receive encoded messages every day that had to be manually decoded as quickly as possible — this decoding would be done independently by him and the second person on duty, and they would then compare to make sure they matched. In the case of a real launch, not only would the two people in the underground facility need to agree that the command was issued, but a second team in another facility would need to do the same.

-He was not allowed to know the targets of the missiles he would be launching, though these targets were fixed for each missile.

-It was almost assumed that if they were launching, they would have already been hit on the surface by a nuclear weapon (locations of the launch facilities were not secret, because they wouldn’t be a deterrent if they were secret). The two people underground are positioned in what looks like a shipping container suspended inside a submarine hull, all encased and locked behind one giant thick steel (?) door. If the elevator shaft had collapsed during an impact, they would be stuck inside to die. So they did include an escape hatch in the roof, but buried deep underground — this would involve the two men opening the escape hatch, letting a bunch of sand fall through, and then digging upward through 100-ish feet of ground over many days to get to a surface that was a wasteland. He was never really convinced that this would work, but the men had to believe that if they did their jobs, there would be some way to survive it.

GrantS | 1 year ago | on: Minuteman missile communications

Coincidentally, I just toured the South Dakota Minuteman launch control facility this week [1] and it was fascinating. The park ranger giving the tour was a veteran who manned the facility decades ago — amazing stories. You need to book tickets a few months in advance, but it’s well worth it if you’re in the area to visit Badlands, Mt. Rushmore, etc.

[1] Run by U.S. National Park Service: https://www.nps.gov/mimi/

GrantS | 3 years ago | on: Illusion Diffusion: Optical Illusions Using Stable Diffusion

Since the technical nugget is hidden in the code: the fun trick here is to alternate, on odd and even denoising steps, between moving toward “duck” in the latent-space image and moving toward “rabbit” in the 90-degree-rotated version of that latent-space image.

(Normally you would feed the output of step n straight back in as input to step n+1. Here that feedback loop is interrupted: on alternate steps the latent is rotated 90 degrees before being pushed toward the other prompt.)
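The alternation can be sketched with a toy numpy model. This is my own construction, not the repo's code: "moving toward a prompt" is stood in for by a small interpolation step toward a fixed random target tensor, where the real project uses a diffusion model's denoising step.

```python
import numpy as np

# Toy sketch of the alternating-rotation trick (stand-in math, not real diffusion).
rng = np.random.default_rng(0)
duck = rng.normal(size=(8, 8))    # stand-in for the "duck" direction in latent space
rabbit = rng.normal(size=(8, 8))  # stand-in for the "rabbit" direction

latent = rng.normal(size=(8, 8))
for step in range(50):
    if step % 2 == 0:
        # even step: nudge the upright latent toward "duck"
        latent += 0.1 * (duck - latent)
    else:
        # odd step: rotate 90 degrees, nudge toward "rabbit", rotate back
        r = np.rot90(latent)
        r += 0.1 * (rabbit - r)
        latent = np.rot90(r, k=-1)

# Upright, the latent ends up duck-like; rotated 90 degrees, rabbit-like.
```

The converged latent is a compromise between the two targets, which is exactly why the final image reads as a duck one way up and a rabbit the other.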

GrantS | 3 years ago | on: Bard and new AI features in Search

Possibly a reference to the 1956 Isaac Asimov short story "Someday": https://en.wikipedia.org/wiki/Someday_(short_story)

"The story concerns a pair of boys who dismantle and upgrade an old Bard, a child's computer whose sole function is to generate random fairy tales. The boys download a book about computers into the Bard's memory in an attempt to expand its vocabulary, but the Bard simply incorporates computers into its standard fairy tale repertoire..."

GrantS | 3 years ago | on: I Asked ChatGPT to Explain Some Jokes to Me

Yes, the joke’s logic holds. The first two logicians saying “I don’t know” means each of them wants beer but is unsure whether everyone does. The last logician knows this and can answer “yes.” If either of the first two did not want beer, they would know that “everyone wants beer” must be false and would simply have said “no.”
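The protocol is small enough to simulate; here is a toy model (my own construction) of how each logician answers “Does everyone want beer?” given their own preference and the answers heard so far:

```python
def answers(wants):
    """Simulate logicians answering 'Does everyone want beer?' in turn.

    wants: tuple of booleans, one per logician, in speaking order.
    Stops once someone settles the question with a definite answer.
    """
    out = []
    for i, w in enumerate(wants):
        if not w:
            out.append("no")  # my own 'no' makes 'everyone wants beer' false
            break
        # I want beer; everyone before me said "I don't know",
        # which (per the reasoning above) means they all want beer too.
        if i == len(wants) - 1:
            out.append("yes")  # last speaker: all priors want beer, and so do I
        else:
            out.append("I don't know")  # still unsure about those after me
    return out
```

Running it over the three-logician cases reproduces the joke: `(True, True, True)` yields “I don’t know”, “I don’t know”, “yes”, while any non-drinker answers “no” immediately.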

GrantS | 3 years ago | on: Stack Overflow questions are being flooded with answers from ChatGPT

I noticed this phenomenon when asking for a story involving Stephen Hawking, Darth Vader, and Wookiees. I spelled it “Wookies” by mistake and ChatGPT used my misspelling throughout the story despite seemingly getting all the other details correct.

Edit: Part of the generated story:

As they continued to talk, Stephen and Darth realized that they had something else in common - they both had lost the use of their biological legs. Stephen had been diagnosed with ALS at a young age and had gradually lost the ability to move his legs, while Darth had lost his legs in a battle with Obi-Wan Kenobi on the planet of Mustafar.

Stephen and Darth discussed the challenges and obstacles that they had faced as a result of their mobility issues, and how they had adapted and overcome them. They also talked about the technological advancements that had allowed them to continue their work and pursue their passions, despite their limitations.

Suddenly, Stephen and Darth turned on each other, each revealing that they had been secretly plotting against the other. Stephen accused Darth of using the Force for evil and corrupt purposes, while Darth accused Stephen of using his scientific knowledge to create weapons of mass destruction.

GrantS | 4 years ago | on: A Shader Trick

Aha! This explains a 20-year-old mystery bug! In undergrad, one of my side projects was a procedural 3D world from scratch (kind of Myst meets Halo), including physics, graphics (OpenGL), and the raw synthesized sound of ocean waves and plasma grenade explosions from sine waves via granular synthesis. After about 10 minutes exploring the world (or running around throwing tiny spheres down hills and scattering them with plasma grenades) my relaxing synthesized ocean wave sounds started to sound garbled and distorted so I always had to restart the program — now I know why :)
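For anyone curious, the usual culprit in this class of bug is single-precision phase: a raw float32 time value fed into sin() loses resolution as it grows. A quick illustration (my own numbers, assuming a 440 Hz oscillator; the standard fix is to wrap the phase with fmod/fract before it gets large):

```python
import numpy as np

# Spacing between adjacent representable float32 values near the phase
# argument of sin(2*pi*f*t), early in the program vs. ten minutes in.
f = 440.0                 # oscillator frequency in Hz
two_pi_f = 2.0 * np.pi * f

phase_early = np.float32(two_pi_f * 0.01)   # t = 10 ms, phase ~27.6 rad
phase_late = np.float32(two_pi_f * 600.0)   # t = 10 min, phase ~1.66e6 rad

print(np.spacing(phase_early))  # ~2e-6 rad: inaudible quantization
print(np.spacing(phase_late))   # 0.125 rad: audibly garbled sine waves
```

Once the representable steps are a tenth of a radian, every sample of the waveform jumps in coarse phase increments, which matches the gradual onset of distortion after minutes of runtime.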

GrantS | 4 years ago | on: Chatbots: Still dumb after all these years

Yes, this is both the most misunderstood and the most amazing (and unpredicted?) thing about GPT-3 et al. You have to tell it who it is and what the rules are. And it gains or lacks knowledge depending upon who it thinks it is answering as. You’ll find that Stephen Hawking doesn’t have much to say about the Ninja Turtles while celebrities know nothing about black holes. It does not answer questions or have expertise unless you tell it that it is answering questions and has expertise. (And it is incredible that this is the case, and somewhat understandable that people don’t understand this.)

GrantS | 5 years ago | on: Algorithmic Theories of Everything (2000)

The ELI5: If the universe is computable, might we be in the simplest computable universe, or the fastest? Can computer science and math help us figure out the reasons for the physics that run our world?

GrantS | 5 years ago | on: Are we in an AI Overhang?

I thought the overhang was going to be along the lines of the following, whether realistic or not:

-GPT-3, as is, should be the inner loop of a continuously running process which generates 1000s+ of ideas for "how to respond next" to any query, with a separate network on top of it as the filter which cherry-picks the best responses (as humans are already doing with the examples they are posting)

-Since GPT-3, as is, can already predict both sides of a conversation, it can steer a conversation toward a goal state just like AlphaGo does: by evaluating 1000s+ of potential moves (responses and counter-responses) until it finds the best thing to say in order to get you to say what it "wants" you to say.

It seems ready to go as the initial attempt at the inner loop of both of these tasks (and more) without modification or retraining of the core network itself, no?
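As a sketch, the first proposal is just a generate-then-filter loop. Everything below is a hypothetical stand-in: generate() fakes sampling candidate responses from the model, and score() fakes the separate filter network that cherry-picks among them.

```python
import random

def generate(prompt, n=1000):
    # Stand-in for sampling n candidate continuations from a language model;
    # each fake candidate carries a random "quality" number in parentheses.
    return [f"{prompt} -> candidate {i} ({random.random():.3f})" for i in range(n)]

def score(candidate):
    # Stand-in for a separate filter network rating a candidate;
    # here it just reads back the fake quality number.
    return float(candidate.split("(")[-1].rstrip(")"))

def respond(prompt):
    # The proposed inner loop: generate many ideas, keep the best one.
    candidates = generate(prompt)
    return max(candidates, key=score)
```

The second proposal nests this same loop one level deeper: score a candidate by generating the other side's likely replies and evaluating how close they get to the goal state.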

GrantS | 7 years ago | on: Childhood's End: The digital revolution has turned into something else

Agreed, it takes a while to get there, but the core idea comes toward the end:

The genius — sometimes deliberate, sometimes accidental — of the enterprises now on such a steep ascent is that they have found their way through the looking-glass and emerged as something else. Their models are no longer models. The search engine is no longer a model of human knowledge, it is human knowledge. What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought. No one is at the controls. If enough drivers subscribe to a real-time map, traffic is controlled, with no central model except the traffic itself. The successful social network is no longer a model of the social graph, it is the social graph. This is why it is a winner-take-all game. Governments, with an allegiance to antiquated models and control systems, are being left behind.

GrantS | 8 years ago | on: A photographer captures the paths that birds make across the sky

Beautiful -- several years ago, I created an iOS app that does this live on screen in real time (via CoreImage filters, saving a video of the trails building up) but then never released it. I should probably just put it out there -- it was all ready to go except for a name, if I recall; I made a video set to music and everything. Will see if I can dig up the video...

Edit: 45 second video here for anyone interested: https://youtu.be/df_Pr4jAu78

GrantS | 8 years ago | on: Amazon Echo Look

100% correct. There is no way they included a depth sensor in this device just to blur the background. They may also be aiming to map any Amazon clothing purchases to 3D body shapes for a purely data-driven approach to clothing recommendation to start with (as opposed to having 3D scans of every bit of clothing to do geometric fitting, which is prohibitively labor-intensive). So they'll know your shape, what exact items you've purchased, how often you wear those purchases, and (if I understood correctly) how your friends think you look in them.

GrantS | 9 years ago | on: NHTSA’s full investigation into Tesla’s Autopilot shows 40% crash rate reduction

Here is some of that IIHS research [1] from Jan 2016 that gives lots of raw crash numbers broken down by manufacturer, type of system (AEB/FCW), injuries involved, etc. Really informative.

Some excerpts from "Effectiveness of Forward Collision Warning Systems with and without Autonomous Emergency Braking in Reducing Police-Reported Crash Rates", January 2016:

"FCW alone and FCW with AEB reduced rear-end striking crash involvement rates by 23% and 39%, respectively. "

"Among the 15,802 injury crash involvements in these states, the percentage of injury crash involvements that were rear-end striking crashes was larger among vehicles without front crash prevention (15%) than among vehicles with FCW alone (12%) or FCW with AEB (9%). Only 4% of rear-end injury crashes involved fatalities or serious (A-level) injuries."

"Approximately 700,000 U.S. police-reported rear-end crashes in 2013 and 300,000 injuries in such crashes could have been prevented if all vehicles were equipped with FCW with AEB that performs similarly as it did for study vehicles."

[1] http://orfe.princeton.edu/~alaink/SmartDrivingCars/Papers/II...
