jharsman's comments

jharsman | 22 days ago | on: Two Years of Emacs Solo

Emacs Solo actually contains functionality for just that: the snippet below allows exporting xref buffers to grep format by pressing 'E'. You can then use wgrep etc.

    ;; Makes any xref buffer "exportable" to a grep buffer with "E" so you can edit it with "e".
    (defun emacs-solo/xref-to-grep-compilation ()
      "Export the current Xref results to a grep-like buffer (Emacs 30+)."
      (interactive)
      (unless (derived-mode-p 'xref--xref-buffer-mode)
        (user-error "Not in an Xref buffer"))

      (let* ((items (and (boundp 'xref--fetcher)
                         (funcall xref--fetcher)))
             (buf-name "*xref→grep*")
             (grep-buf (get-buffer-create buf-name)))
        (unless items
          (user-error "No xref items found"))

        (with-current-buffer grep-buf
          (let ((inhibit-read-only t))
            (erase-buffer)
            (insert (format "-*- mode: grep; default-directory: %S -*-\n\n"
                            default-directory))
            (dolist (item items)
              (let* ((loc (xref-item-location item))
                     (file (xref-file-location-file loc))
                     (line (xref-file-location-line loc))
                     (summary (xref-item-summary item)))
                (insert (format "%s:%d:%s\n" file line summary)))))
          (grep-mode))
        (pop-to-buffer grep-buf)))
    (with-eval-after-load 'xref
      (define-key xref--xref-buffer-mode-map (kbd "E")
                  #'emacs-solo/xref-to-grep-compilation))

jharsman | 2 years ago | on: Everyone hates the electronic medical record

This is very true. There are several reasons why most EHRs are so bad:

1) The people who pay generally do not use the system. This is true for enterprise software in general, and it leads vendors to prioritize having every feature organizations ask for (regardless of whether it is a good idea) and to prioritize features management deems important over fundamental workflow, UX and polish in general.

2) EHRs are very large and complex, and vendors can almost always gain more customers by adding even more features and replacing smaller, more specialized systems. A typical EHR will have features for ordering tests and viewing results (clinical chemistry, microbiology, radiology, and more specialized areas like physiology), appointments and resource planning (rooms, equipment, personnel, staffing), clinical notes including computing scores and values based on other values, medication (ordering, administering, sending prescriptions electronically) and administration (admissions, discharge, payment, waiting lists). That is a lot of different stuff!

3) Once a vendor wins a contract and installs their EHR, very little can be gained by improving the lives of users. Contracts and sales cycles are very long, and the vendor gains very little financially by improving the system. So many vendors focus on charging money for customer-specific features or adding new features to win new tenders.

I'm not sure what the solution is; public alternatives have failed spectacularly, since they are typically run by public administrators who have even less of a clue about how to develop software and what users want than the vendors do.

jharsman | 4 years ago | on: Mapping Perlin Noise to Angles

Yes, the standard solution to this is to take the curl of the scalar-valued noise field. This gives you a vector field which is perpendicular to the gradient and is divergence-free.
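A minimal sketch of that construction in Python, using a hypothetical smooth trig function as a stand-in for a real Perlin noise implementation (any smooth scalar field works): in 2D, rotating the gradient 90 degrees gives the "curl" field, whose divergence is zero analytically.

```python
import math

def noise(x, y):
    # Hypothetical smooth scalar field standing in for Perlin noise.
    return math.sin(1.3 * x) * math.cos(0.7 * y) + 0.5 * math.sin(2.1 * x + 0.9 * y)

def curl_field(x, y, eps=1e-4):
    """2D curl of a scalar field: the gradient rotated 90 degrees.

    v = (dn/dy, -dn/dx) is perpendicular to the gradient, and
    div v = d2n/dxdy - d2n/dydx = 0, so the field is divergence-free.
    """
    dn_dx = (noise(x + eps, y) - noise(x - eps, y)) / (2 * eps)
    dn_dy = (noise(x, y + eps) - noise(x, y - eps)) / (2 * eps)
    return (dn_dy, -dn_dx)

def divergence(f, x, y, eps=1e-3):
    # Numerical divergence, to verify the field is (nearly) divergence-free.
    (vx1, _), (vx0, _) = f(x + eps, y), f(x - eps, y)
    (_, vy1), (_, vy0) = f(x, y + eps), f(x, y - eps)
    return (vx1 - vx0) / (2 * eps) + (vy1 - vy0) / (2 * eps)

vx, vy = curl_field(0.4, -1.2)
# divergence(curl_field, x, y) is ~0 up to finite-difference error
```

Particles advected along this field swirl without bunching up, which is why curl noise is popular for fluid-like motion.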

jharsman | 6 years ago | on: Mercurial’s journey to and reflections on Python 3

I was an early adopter of Mercurial, and the team's insistence that file names were byte strings was the cause of lots of bugs when it came to Unicode support.

For example, when I converted our existing Subversion repository to Mercurial, I had to rename a couple of files that had non-ASCII characters in their names because Mercurial couldn't handle them. On Windows, at least, the file names would be broken either in Explorer or on the command line.

In fact I just checked, and it is STILL broken in Mercurial 4.8.2, which I happened to have installed on my Windows work laptop. Any file with non-ASCII characters in its name is shown garbled in the command line interface on Windows.

I remember some mailing list post from way back when where mpm said that it was very important that hg was 8-bit clean, since a Makefile might contain some arbitrary string of bytes naming a file, and for that Makefile to work the file in question had to have exactly that string of bytes as its name. Of course, if file names are just strings of bytes instead of text, you can't display them, send them over the internet to a machine with a different file name encoding, or do hardly anything useful with them. So basic functionality still seems to be broken in order to support Unix systems with non-ASCII file names that aren't in UTF-8.

jharsman | 9 years ago | on: Flickr – A Year Without a Byte

That's not how lossless compression of JPEGs works.

Besides removing information from the file that doesn't affect the rendered image (like EXIF data), lossless recompressors typically replace the Huffman coding of the DCT coefficients with a more efficient arithmetic coder. So you don't start over from raw pixels; you replace the entropy coding with a more modern and efficient algorithm. That means ordinary software can't read the result (since you've essentially created a new format), but you can decompress back to a standard JPEG whenever someone wants to look at the image.

jharsman | 9 years ago | on: Show HN: WebGL Fire Simulation

Yes, the actual burning-fuel part is just random noise, which doesn't look very good. I mention it as a possible improvement under "Better looking fuel".

jharsman | 9 years ago | on: Show HN: WebGL Fire Simulation

I can't get your example code to work, but that is a completely different technique: ray marching a volume displaced by a noise function. This gives nice 3D-looking flames, but the movement tends to look like a scrolling noise function. It's also harder to use arbitrary burning shapes; my simulation supports drawing anything, and it will burn.

jharsman | 10 years ago | on: Memory and C++ Debugging at Electronic Arts [video]

Traditionally TVs only support 60 Hz refresh rates (or 50 Hz for older PAL sets), so you either render a new frame for each frame the TV can refresh, or you display a frame for two TV refreshes.

This isn't strictly true any more, since many TVs now support 72 Hz (to be able to display 24 fps content like film), but my guess is that support isn't wide enough to rely on.

jharsman | 10 years ago | on: Why Intel Added Cache Partitioning

You don't get high bandwidth utilization by pointer chasing unless you have many threads doing it and you switch threads while waiting on memory. That's true for GPUs, not for typical server workloads running on CPUs.

jharsman | 10 years ago | on: Why Intel Added Cache Partitioning

I find it really weird that the article says:

> It’s curious that we have low cache hit rates, a lot of time stalled on cache/memory, and low bandwidth utilization.

That's typical for most workloads! Software is almost never compute or bandwidth bound in my experience, but instead spends most of its time waiting on memory in pointer chasing code. This is especially true for code written in managed languages like Java (since everything typically is boxed and allocated all over the heap).

jharsman | 11 years ago | on: Programmer proverbs

Having static preferences that never change does indicate stagnation.

But the proverb in the form given is dumb.

jharsman | 13 years ago | on: Heroku Blog: Routing Performance Update

If your requests are CPU intensive, Node.js won't help since it doesn't support preemption.

And even if you're primarily IO-limited, a single request that consumes too much CPU will cause queuing.
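Node isn't needed to see this effect; the same single-threaded, run-to-completion scheduling can be sketched with Python's asyncio (a stand-in here, not the original setup). A task that burns CPU without yielding delays every task queued behind it:

```python
import asyncio
import time

async def cpu_heavy(duration):
    # Pure CPU work with no await: the event loop cannot preempt it,
    # so every other task queues behind it until it finishes.
    t0 = time.monotonic()
    while time.monotonic() - t0 < duration:
        pass

async def quick():
    # A "cheap" request; records when it actually got to run.
    return time.monotonic()

async def main():
    start = time.monotonic()
    heavy = asyncio.create_task(cpu_heavy(0.2))  # scheduled first
    fast = asyncio.create_task(quick())
    done_at = await fast
    await heavy
    # quick() could only run after cpu_heavy()'s 0.2 s of work completed.
    return done_at - start

delay = asyncio.run(main())
print(delay)  # roughly 0.2, not ~0
```

With preemptive threads the quick task would complete almost immediately; in a cooperative event loop its latency is the CPU-bound task's entire runtime.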

jharsman | 13 years ago | on: Raspberry Pi now with 512MB RAM

Garbage collected systems typically implement allocation with a simple pointer bump. This is possible because values are moved in memory by the garbage collector, updating references automatically. You can then compact all the empty space when collecting garbage, making allocation easy.

This is obviously faster than malloc, which is what people are comparing against when they say allocation is faster with a garbage collector. Collecting the garbage, i.e. de-allocation, can be more expensive though, since it might require scanning large parts of the heap.
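A toy sketch of the idea (illustrative only, not modeled on any particular runtime): allocation is just a bounds check plus an addition on a bump pointer. Collection, not modeled here, would move live objects to the start of the region and reset the pointer past them.

```python
class BumpHeap:
    """Toy model of pointer-bump allocation in a compacting GC."""

    def __init__(self, size):
        self.size = size
        self.top = 0          # the bump pointer

    def alloc(self, nbytes):
        # The entire "malloc": one comparison and one addition.
        if self.top + nbytes > self.size:
            raise MemoryError("heap exhausted; a real GC would collect here")
        addr = self.top
        self.top += nbytes
        return addr

heap = BumpHeap(1024)
a = heap.alloc(16)   # -> 0
b = heap.alloc(32)   # -> 16, placed right after the first allocation
```

Compare this with a free-list malloc, which may have to search for a suitably sized hole; compaction is what keeps the bump region contiguous and the allocation path this short.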

Since games generally use region-based allocators, the performance gain is probably very small there. If you make lots of calls to malloc, then the gain would be larger.

jharsman | 13 years ago | on: MySQL is bazillion times faster than MemSQL

Note that writing data to a single disk (or SAN array, or RAID controller) really isn't durable either, even if the data does actually get to the disk and isn't sitting in a write cache somewhere.

What if that disk crashes, or the SAN array breaks and kills all the data, or the data center burns down?

jharsman | 14 years ago | on: Why is Windows so slow?

Presumably to find out whether the difference lies with the filesystem or somewhere else?

If Linux is still much faster, even with the same filesystem, you have eliminated one variable.
