n00b101's comments

n00b101 | 1 year ago | on: Programming as Theory Building (1985) [pdf]

I think you nerds need to stop reading obsolete academic fad papers from 1985. Imagine if your girlfriend were unironically reading 1985 issues of Cosmo to figure out what to wear.

A computer program is a "model" of some thing. For example:

    float m = 1e10f;   // mass, in kg
    float a = 9.8f;    // acceleration, in m/s^2
    float F = m * a;   // Newton's second law: F = ma
Another example:

    float paycheque;
    if (isStillEmployed(employee)) {
        paycheque = getSalary(employee);
    } else {
        paycheque = 0.00f;
    }

n00b101 | 1 year ago | on: The Unisys Icon: One Canadian Xennial's Memories of Ontario's Obscure Computer

Consider yourself lucky.

My public high school in Ontario was supposed to be a "magnet school for the gifted" and instead turned out to be a scam.

The computer class teacher was absent for a year, and the substitute teacher insisted that the keyboard and mouse cords should be neatly arranged at the end of each class as if it was a knitting class. The "coursework" consisted of learning how to type out "business memos" using a word processor.

The school believed that this was an important skill and imagined that we would be writing "memos" on computers and printing them out in the "business world."

I skipped every class I could to hang out with my girlfriend and got out with a 2.0 GPA.

The school in question has since been demolished. The whole scam was to try to prevent the school from being demolished due to low performance, so they pretended to be a "magnet school for the gifted."

n00b101 | 1 year ago

Is anyone surprised by this? Do any guys remember women getting higher grades in college?

n00b101 | 1 year ago | on: A Hamiltonian Circuit for Rubik's Cube

FYI, it would take approximately 99.3 billion years to complete the Hamiltonian circuit of the Rubik's cube’s quarter-turn metric Cayley graph using the GAN 12 Maglev UV Coated 3x3 Rubik's cube.
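Back-of-the-envelope check (the ~13.8 quarter turns per second is my own assumed sustained pace, roughly world-record speedcubing speed; everything here is an estimate):

```python
# A Hamiltonian circuit of the quarter-turn-metric Cayley graph makes
# one quarter turn per reachable state of the 3x3 cube.
STATES = 43_252_003_274_489_856_000   # reachable 3x3 cube states

TURNS_PER_SECOND = 13.8               # assumed sustained turning pace
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years = STATES / TURNS_PER_SECOND / SECONDS_PER_YEAR
print(f"{years / 1e9:.1f} billion years")  # ≈ 99.3 billion years
```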

n00b101 | 1 year ago | on: Ask HN: Who wants to be hired? (November 2024)

Location: Toronto, ON

Remote: Yes

Willing to relocate: Yes

Technologies: Python, Machine Learning, C/C++, SQL, HTML/CSS/JS, Node.js, Embedded Computing (Arduino, FPGA)

Résumé/CV: https://eclipse-consulting.github.io/cv.pdf

Email: [email protected]

Portfolio: https://eclipse-consulting.github.io/

I am a full-stack software engineer with a background in applied mathematics, high performance computing and real-time systems. Currently working on a side project involving numerical computing and AI models.

I am interested in full-time and/or contract opportunities.

n00b101 | 1 year ago | on: What is theoretical computer science?

No. It's humour.

There is no such thing as the Second Law of Thermodynamics of a Turing Machine.

Unless! You turn the machine off. Then energy input equals zero, it becomes a closed system, and entropy kicks in.

n00b101 | 1 year ago | on: What is theoretical computer science?

Ah, Professor Vardi, a fascinating case study in our department. His devotion to the 'science' in computer science is truly something to behold. It's not every day you see someone try to reconcile Turing machines with the second law of thermodynamics ...

Dr. Vardi's Second Law of Thermodynamics for Boolean SAT and SMT (Satisfiability Modulo Theories) solvers is truly a marvel of interdisciplinary ambition. In his framework, computational entropy is said to increase with each transition of the Turing machine, as if bits themselves somehow carry thermodynamic weight. He posits that any algorithm, no matter how deterministic, gradually loses "information purity" as it executes, much like how heat dissipates in a closed system.

His real stroke of genius lies in the idea that halting problems are not just undecidable, but thermodynamically unstable. According to Dr. Vardi, attempts to force a Turing machine into solving such problems inevitably lead to an "entropy singularity," where the machine's configuration becomes so probabilistically diffuse that it approaches the heat death of computation. This, he claims, is why brute-force methods become inefficient: they aren't just computationally expensive, they are thermodynamically costly as well.

Of course, there are skeptics who suggest that his theory might just be an elaborate metaphor stretched to its breaking point. After all, it's unclear whether bits decay in quite the same way as particles in a particle accelerator.

n00b101 | 1 year ago | on: Machine learning and information theory concepts towards an AI Mathematician

### *Formalization and Implementation*: While the paper lays out a theoretical framework, its practical implementation may face significant challenges. For instance, generating meaningful mathematical conjectures is far more abstract and constrained than tasks like generating text or images. The space of potential theorems is vast, and training an AI system to navigate this space intelligently would require further breakthroughs in both theory and computational techniques.

### *Compression as a Measure of Theorem Usefulness*: The notion that a good theorem compresses provable statements is intriguing but may need more exploration in terms of practical utility. While compression aligns with Occam's Razor and Bayesian learning principles, it's not always clear whether the most "compressed" theorems are the most valuable, especially when considering the depth and complexity of many foundational theorems in mathematics.
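To make the compression view concrete, here is a toy MDL-style score (the scoring function and every number are made up purely for illustration): a theorem is "useful" if the one-time cost of stating it is outweighed by the bits it saves when encoding the statements it proves.

```python
# Toy MDL-style usefulness score for a candidate theorem
# (illustrative only; not from the paper).

def mdl_gain(theorem_cost, costs_without, costs_with):
    """Net bits saved by adopting the theorem (positive = useful)."""
    return sum(costs_without) - (theorem_cost + sum(costs_with))

# Hypothetical encoding costs, in bits, for five provable statements.
without = [120, 95, 140, 110, 130]   # each stated from scratch
with_thm = [40, 35, 55, 45, 50]      # each stated by citing the theorem

print(mdl_gain(80, without, with_thm))  # → 290 bits saved
```

Even in this toy form you can see the worry in the point above: a theorem can score well on raw compression while telling you nothing about which compressed statements are mathematically deep.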

### *Human-AI Collaboration*: The paper lightly touches on how this AI mathematician might work alongside humans, but the real power of such a system might lie in human-AI collaboration. A mathematician AI capable of generating insightful conjectures and proofs could dramatically accelerate research, but the interaction between AI and human intuition would be key.

### *Computational and Theoretical Limits*: There are also potential computational limits to the approach. The "compression" and "conjecture-making" frameworks proposed may be too complex to compute at scale, especially when considering the vast space of possible theorems and proofs. Developing approximation methods or heuristics that are effective in real-world applications will likely be necessary.

Here's how we can unpack this paper:

### *System 1 vs. System 2 Thinking*:

- *System 1* refers to intuitive, fast, and automatic thinking, such as recognizing patterns or generating fluent responses based on past experience. AI systems like GPT-4 excel in this area, as they are trained to predict and generate plausible content based on large datasets (e.g., text completion, language generation).

- *System 2* refers to deliberate, logical, and slow thinking, often involving reasoning, planning, and making sense of abstract ideas—such as solving a mathematical proof, engaging in formal logic, or synthesizing novel insights.

The claim that AI lacks System 2 abilities suggests that while AI can mimic certain behaviors associated with intelligence, it struggles with tasks that require structured, step-by-step reasoning and deep conceptual understanding.

### "Not so much in terms of mathematical reasoning"

The claim is *partially true*, but it must be put into context:

   - **Progress in AI**: AI has made **tremendous advances** in recent years, and while it may still lack sophisticated mathematical reasoning, there is significant progress in related areas like automated theorem proving (e.g., systems like Lean or Coq). Specialized systems can solve well-defined, formal mathematical problems—though these systems are not general-purpose AI and operate under specific constraints.

   - **Scope of Current Models**: General-purpose models like GPT-4 weren't specifically designed for deep mathematical reasoning. Their training focuses on predicting likely sequences of tokens, not on formal logic or theorem proving. However, with enough specialized training or modularity, they could improve in these domains. We’ve already seen AI systems make progress in proving mathematical theorems with reinforcement learning and imitation learning techniques.

   - **Frontiers of AI**: As AI continues to develop, future systems might incorporate elements of both System 1 and System 2 thinking by combining pattern recognition with symbolic reasoning and logical processing (e.g., systems that integrate neural networks with formal logic solvers or reasoning engines).

### Conclusion

AI excels in tasks involving intuitive, pattern-based thinking but struggles with the deliberate, goal-oriented reasoning required for deep mathematical work. However, as research evolves—especially in hybrid models that combine deep learning with symbolic reasoning and formal logic—these limitations may become less pronounced.

The future of AI may very well involve systems that are capable of the same level of mathematical reasoning (or better) as "human experts."

n00b101 | 1 year ago | on: Ask HN: Who wants to be hired? (October 2024)

Location: Toronto, ON

Remote: Yes

Willing to relocate: Yes

Technologies: Python, Machine Learning, C/C++, SQL, HTML/CSS/JS, Node.js, Embedded Computing (Arduino, FPGA)

Résumé/CV: https://eclipse-consulting.github.io/cv.pdf

Email: [email protected]

Portfolio: https://eclipse-consulting.github.io/

I am a full-stack software engineer with a background in applied mathematics, high performance computing and real-time systems. Currently working on a side project involving numerical computing and AI models.

I am interested in full-time and/or contract opportunities.

n00b101 | 1 year ago | on: Ask HN: Does this AI generated physics paper make any sense? [pdf]

I could have spent time getting it to generate more formulas and diagrams. It's not difficult since it generates LaTeX, but I wouldn't be able to understand them.

According to ChatGPT, the paper makes sense but isn't rigorous enough. Then if I put that critique into a feedback loop, it starts producing "more rigorous" output.

It's a little concerning, so I decided to stop here. It seems it could produce a wall of mathematical text that would take a professional a year to read.

n00b101 | 1 year ago | on: Ask HN: Does this AI generated physics paper make any sense? [pdf]

I spent the weekend running ChatGPT in a loop on a theoretical physics paper. This was the result. It seems to have come up with some type of conjecture.

I'm wondering if anyone who understands Langlands program mathematics or Quantum Field Theory can opine on whether it's gibberish or makes real sense.

It seems interesting, however I have only a cursory understanding of Quantum Field Theory and don't know anything about the Langlands program.

n00b101 | 9 years ago | on: Japan to Unveil Pascal GPU-Based AI Supercomputer

> As someone who works at a deep learning chip startup, this is great news! Looks like there's a market for our chips ;)

While there may be a market for your chips, I'm curious why you think the K computer is in that market? National supercomputers, like Japan’s RIKEN "K" supercomputer, are used for many different applications (for example, physics and engineering simulations) - not just "AI." The multi-purpose use of such machines is what justifies their multi-billion dollar budgets in the first place. I can't imagine a government spending billions of dollars on a machine that only has one function (e.g. neural net training).

The history of HPC hardware is littered with special-purpose HPC microarchitectures that were eventually abandoned in favor of general-purpose processors. The one lasting exception to this has been GPUs, which have proven to be a boon to HPC applications and sparked the Deep Learning renaissance in machine learning. The difference with GPUs is that they were not strictly aimed at HPC applications. Obviously, they are used for graphics rendering in gaming, professional graphics and CAD. There are hundreds of millions of GPUs deployed for gaming and other graphics applications. The application of GPUs to HPC came later, and the specific application to deep neural networks came later still. GPUs are successful because they are a form of commodity hardware and have a wide range of applications. In a sense, hard-core gamers have become the R&D funding source for state-of-the-art HPC processors. This healthy and diversified ecosystem is what allows for the long-term sustainability of the microarchitecture.

You can always build a more efficient machine by specializing it to a narrow application. In the extreme case, you can just build a custom ASIC that has some fixed function. That would be the ultimate solution in efficiency, but things become less sustainable when you need to continuously compete with alternative solutions - the cost of competing in this space is astronomical, and there needs to be a sustainable source of funding for that activity. This is why the HPC industry is completely dominated by Intel/AMD/NVIDIA processors, instead of custom ASICs that (for example) could perform some fixed matrix operations.

Having said that, there is a vague opportunity on the horizon if and when Moore's Law scaling completely fizzles out. Conceivably, once process-node scaling ends and the established microarchitectures have been optimized to death, the industry will reach a state where competition on performance and efficiency stalls: there will be no major next-gen CPU or GPU, because nothing more can be done to improve the product while preserving its general-purpose applicability. At that stage, a significant opportunity could open up for special-purpose processors, and it could prove sustainable since the field would be far less competitive.

n00b101 | 9 years ago | on: Oh, you’re with them? (2016)

I don't think this kind of thing is isolated to the tech industry. Edith Cooper at Goldman Sachs shared this recently:

"I am a black woman, a mother, a wife and a professional. I am the daughter of a dentist and a sister to four siblings. I’m a runner, a golfer and a knitter. I graduated from an Ivy League school and earned an MBA. I’ve spent the past 30 years working on Wall Street, half of those as a partner at Goldman Sachs.

I am frequently asked "what country are you from?" (I grew up in Brooklyn). I've been questioned about whether I really went to Harvard (I did) or how I got in (I applied). I've been asked to serve the coffee at a client meeting (despite being there to "run" the meeting) and have been mistaken as the coat check receptionist at my son's school event. And, on the flip side, it's also been suggested to me that I'm not "black black" because of the success I have had, or even where I live ... People frequently assumed I was the most junior person in the room, when in fact, I was the most senior. I constantly needed to share my credentials when nobody else had to share theirs ..." [1]

[1] http://www.businessinsider.com/edith-cooper-goldman-sachs-on...

n00b101 | 9 years ago | on: A Review of Modern Sail Theory (1981) [pdf]

I think a convincing refutation of the Bernoulli effect as an explanation of flight is the fact that some aircraft have symmetrical airfoils and can fly upside down. This is a less convincing argument to sailors whose boats don't usually sail very well upside down :)
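To make that concrete: thin-airfoil theory predicts a lift coefficient of roughly C_L = 2πα for a symmetric airfoil, so lift comes from angle of attack, not camber. At zero angle of attack a symmetric airfoil generates no lift, and flying inverted just means pitching to restore a positive effective angle of attack. A quick sketch (the airspeed, wing area, and angle are made-up example numbers):

```python
import math

def lift(alpha_rad, rho=1.225, v=60.0, S=16.0):
    """Lift (N) of a symmetric airfoil per thin-airfoil theory:
    C_L = 2*pi*alpha, L = 0.5*rho*v^2*S*C_L.
    rho: air density (kg/m^3), v: airspeed (m/s), S: wing area (m^2),
    all example values."""
    cl = 2 * math.pi * alpha_rad
    return 0.5 * rho * v * v * S * cl

alpha = math.radians(5)       # 5 degrees nose-up
print(lift(alpha) > 0)        # upright, positive AoA: positive lift
print(lift(-alpha) < 0)       # negative AoA: lift reverses sign
print(lift(0.0) == 0.0)       # symmetric airfoil at zero AoA: no lift
```

Which is exactly why the argument doesn't impress sailors: a sail, unlike a rigid symmetric wing, can't just be re-pitched when the boat is upside down.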