wmeddie's comments

wmeddie | 2 years ago | on: Destruction of nuclear bombs using ultra-high energy neutrino beam (2003) [pdf]

I don't think I've ever seen a sci-fi rendition of this concept. It means that a sufficiently advanced alien species could come to a planet, disable all of its nuclear weapons and power plants from orbit, and then start their invasion. Not as cinematic as a Death Star planet explosion, but it sounds like a good tactical move for a galactic empire.

wmeddie | 2 years ago | on: Common Lisp: An Interactive Approach (1992)

I think you can argue that the pervasive use of notebooks is close enough for learning, at least, but it's not as good for real development. The edit-and-continue feature in Visual Studio for C# (and the similar feature in Java) is the closest non-Lisp thing we have these days. Those languages aren't made for it the way Lisp is, though; you have to do full restarts all the time.

I still wish there was an environment more like Smalltalk for Python.

wmeddie | 2 years ago | on: Ask HN: How to handle Asian-style “Family name first” when designing interfaces?

Definitely read https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-... if you haven't yet.

Then think about what your system actually requires when it comes to names.

Does the app need to know what a user's name is at all or is a username enough? Does it need to distinguish the family part of their name for anything?

The most general approach, I think, is to just have a Full Name field (minimum length 1, with either "John Doe" or something cute as the default) and a Nickname or Display Name field if your app needs to show something on screen.
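A minimal sketch of that idea (the class and field names here are illustrative, not from any particular framework):

```python
from dataclasses import dataclass


@dataclass
class User:
    # One free-form field: no first/last split, no script or
    # length assumptions beyond "at least one character".
    full_name: str
    # What the app actually renders on screen; falls back to the full name.
    display_name: str = ""

    def __post_init__(self):
        if len(self.full_name) < 1:
            raise ValueError("full_name must be at least 1 character")
        if not self.display_name:
            self.display_name = self.full_name
```

The point of the single field is that the user, not the schema, decides how their name is ordered and segmented.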

wmeddie | 2 years ago | on: Ask HN: Is it just me or GPT-4's quality has significantly deteriorated lately?

Yes, people really need to know that unless you are using the browser plugin, you really shouldn't ask it questions like this. (A good rule of thumb, I think: if you can't expect a random person on the street to get the question right without looking it up, you shouldn't expect GPT-4 to get it right either.)

Unfortunately for this question, even using the browser plugin it wasn't able to get the answer: https://chat.openai.com/share/6344f09e-4ba0-45c7-b455-7be59d...

wmeddie | 4 years ago | on: NEC’s Forgotten FPUs

All good questions.

1) It's a custom instruction set; you can read the ISA guide over at https://www.hpc.nec/documentation

2) The main difference, in simple terms, is that AVX instructions have a fixed vector length (4, 8, 16, etc.). On the SX the vector length is flexible, so it can be 10, 4, anything up to max_vlen (256 on the latest ones). Essentially, the idea is that a single instruction can replace a whole for loop. Without a good compiler, though, that means you have to rewrite your nested loops.

3) There are currently two compiler options: the proprietary NCC, or the open-source LLVM fork NEC maintains. NCC is less compatible than GCC/Clang (modern C++17 in particular is problematic) but has a lot of advanced algorithms for rewriting and vectorizing your loops automatically. The LLVM fork currently supports assembly instruction intrinsics, but they are still working on contributing better loop auto-vectorization to LLVM.

4) Porting software to get it working is not terribly difficult, but getting it to perform well is quite a bit harder, depending on the type of workload. Since the scalar core is pretty standard, you can almost always take regular CPU code and get it running (unlike GPU code in general). If you don't leverage the vector processor, though, the performance you get will be nothing special, especially at 1.6GHz. Most of the software made for it starts off as CPU code and is then modified with pragmas or some refactoring until it performs well on the VE. In almost all cases the resulting code still runs fine on a CPU. One example of a project that supports both in a single codebase is the Frovedis framework [1].

I think the chip deserves a little more interest than it gets. It's one of the few accelerators that you can 1) buy today, right now, 2) use with open-source drivers [2], and 3) run TensorFlow on [3]. The lack of fp16 support really hurts it for deep learning, but it's like having a 1080 with 48 GB of RAM; there are still lots of interesting things you can do with that.

[1]: https://github.com/frovedis/frovedis

[2]: https://github.com/veos-sxarr-NEC/ve_drv-kmod

[3]: https://github.com/sx-aurora-dev/tensorflow
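A toy sketch of the flexible-vector-length point in 2) above (pure Python standing in for vector hardware; MAX_VLEN and both loop shapes are illustrative, not real intrinsics):

```python
MAX_VLEN = 256  # illustrative max vector length, as on recent SX models


def add_fixed_width(a, b, width=8):
    """AVX-style: fixed-width chunks, plus a scalar remainder loop."""
    out = [0] * len(a)
    i = 0
    while i + width <= len(a):  # main loop, always exactly `width` elements
        out[i:i + width] = [x + y for x, y in zip(a[i:i + width], b[i:i + width])]
        i += width
    while i < len(a):           # leftover elements handled one at a time
        out[i] = a[i] + b[i]
        i += 1
    return out


def add_flexible_vlen(a, b):
    """SX-style: set the vector length to min(n, MAX_VLEN), so one
    'instruction' covers the whole loop (or strip) with no remainder."""
    out = [0] * len(a)
    i = 0
    while i < len(a):
        vlen = min(MAX_VLEN, len(a) - i)  # any length, e.g. 10, 4, ...
        out[i:i + vlen] = [x + y for x, y in zip(a[i:i + vlen], b[i:i + vlen])]
        i += vlen
    return out
```

The second version is what a vectorizing compiler like NCC effectively emits: the loop body collapses into one vector operation per strip, regardless of the trip count.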

wmeddie | 6 years ago | on: Japan Turns to Coal After Closing Nuclear Power Plants

One thing this article doesn't cover, which was a big part of this decision, is nuclear waste. Nobody wants nuclear waste sites in their prefecture. There was a plan to use breeder reactors, but that hasn't gone anywhere, and the waste is just piling up at the reactor sites with absolutely no place to dispose of it. When the mayor of Osaka merely hinted at creating an underwater waste site, the backlash was huge. I doubt any other politician will step up after that (nor will their parties allow it). So with waste stuck in this deadlock for decades, there's just no way forward for nuclear here.

wmeddie | 6 years ago | on: Smartphones and Dematerialization

I also agree that this estimate is way off, but I think the idea is to consider the total energy use of the phone, including the energy used by the servers that provide the services behind your apps. That is much harder to measure. Luckily, Google, Facebook and Twitter invest a lot in energy savings at their data centers, so this is one of the leanest ways of using compute power. I think that even when you put that energy into the equation, it's still nowhere near a refrigerator.

wmeddie | 6 years ago | on: Fast key-value stores: An idea whose time has come and gone

Systems like Erlang/OTP, Akka, Orbit and Orleans/Service Fabric are built on an actor model where the domain objects of the system (e.g. users, accounts, invoices, etc.) live within the cluster and have an address, so it's like having a bunch of mini servers. These actors typically keep their state in memory so they can respond to query messages quickly, and the application can unload idle (or no longer necessary) actors and restore their state when they are needed again. It's very similar to the linked in-memory key-value idea mentioned in the paper.
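A minimal sketch of that pattern (single-threaded Python standing in for a real actor runtime; the class and method names are illustrative):

```python
class Counter:
    """One 'mini server' per domain object, e.g. one user's counter."""

    def __init__(self, state=0):
        self.state = state

    def receive(self, msg):
        if msg == "increment":
            self.state += 1
        return self.state  # queries are answered straight from memory


class ActorSystem:
    def __init__(self):
        self.live = {}   # address -> in-memory actor
        self.store = {}  # durable state for unloaded actors

    def tell(self, address, msg):
        # Restore the actor from the backing store if it isn't resident.
        if address not in self.live:
            self.live[address] = Counter(self.store.get(address, 0))
        return self.live[address].receive(msg)

    def passivate(self, address):
        """Unload an idle actor, keeping its state durable."""
        actor = self.live.pop(address)
        self.store[address] = actor.state
```

The address acts like a key and the actor like a value with behavior attached, which is why this looks so much like an in-memory key-value store from the outside.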

wmeddie | 6 years ago | on: Pock: Display macOS Dock in Touch Bar

This is pretty great. I only wish it showed up just when I click on the desktop (since Finder does nothing else with the TouchBar) or when I swipe up from the bottom. Parallels actually does this same thing (minus the swiping) for the task bar in Windows guests, but switches to app-specific bars when you open Office or a browser. I'm in the camp that finds the TouchBar really awesome. I never used the function keys (except F5), but I use the TouchBar all the time in IntelliJ, iTunes, and Word/Excel, and it's great in PowerPoint for scrolling to just the slide you want.

I also love the clicky feel of the keyboard so maybe I'm just special...

wmeddie | 8 years ago | on: Michelangelo: Uber’s Machine Learning Platform

Skymind's SKIL does this. (I work on the team.) You don't have to send it your data, though: you can install it inside your own system and have it connect to the data source or stream to train models (optionally doing hyperparameter search), deploy them to REST servers, and monitor their performance.

wmeddie | 9 years ago | on: MacBook Pro

More capacity, and one less display to drive. It should definitely last significantly longer.

wmeddie | 9 years ago | on: Ask HN: To those who became fluent in a second language, what did you do?

I know English, Spanish and Japanese. I started studying Japanese in college, and I live and work in Japan now. Structured lessons were fundamental to becoming fluent, but once I was here I sang a ton of karaoke whenever I could. It really helped my pronunciation and vocabulary. I definitely recommend singing to learn a language.

wmeddie | 9 years ago | on: I've Been Waiting For The Oculus Rift, But Now It's Sitting In My Closet

VR headsets that are attached to big PCs are very much a solo thing at the moment. I can understand wanting to play a multiplayer game like Halo while visiting a friend's house.

I got my Oculus Rift a few weeks ago and absolutely love it. I use it practically every day. Lucky's Tale was surprisingly good, and I can no longer play Elite: Dangerous without the headset.

There's not enough software at the moment. I still want to see a good flight simulator and Altspace-like spaces with actual things to do with people.
