bisrig
|
1 year ago
|
on: I ate and reviewed every snack in our office kitchen
bisrig
|
1 year ago
|
on: Investigating an “evil” RJ45 dongle
I'm not sure what the current state of the art is, but for the longest time it was pretty common for USB peripheral ICs to have small flash devices attached to them in order to be able to store VID/PID and other USB config information, so that the device is enumerated correctly when it's plugged in and can be associated with the correct driver etc. And depending on when the device was designed, 512kB might have been the smallest size that was readily available via supply chain. It would not have been strange to use a device like that to store 10s of bytes!
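To make the "tens of bytes" concrete, here's a hypothetical sketch of the kind of config record such a flash part might hold so the IC enumerates correctly at plug-in. The field layout and ID values are illustrative only, not any specific vendor's format:

```python
# Hypothetical USB config record stored in external flash, read by the
# peripheral IC at power-up. Field layout is illustrative -- real vendors
# each define their own format.
import struct

VID = 0x1234         # idVendor (placeholder value)
PID = 0x5678         # idProduct (placeholder value)
BCD_DEVICE = 0x0100  # device release number, BCD

# Three little-endian uint16 fields plus two config bytes: eight bytes of
# payload, possibly sitting in a 512 kB part because that was the smallest
# size readily available via the supply chain.
record = struct.pack("<HHHBB", VID, PID, BCD_DEVICE, 0x01, 0x00)
print(len(record), record.hex())
```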
The ISO thing is a little bit weird, but to be honest it's a creative way to try to evade corporate IT security policies restricting mass storage USB devices. I think optical drives use a different device class that probably evades most restrictions, so if you enumerate as a complex device that's a combo optical drive/network adapter, you might be able to install your own driver even on computers where "USB drives" have been locked out!
bisrig
|
5 years ago
|
on: Using Microsoft Word with Git
Back in the bad old days of version control (thinking of VSS here), I was overall pretty satisfied with how the check-in/check-out mechanics worked for Word docs and the like. In this case you have the benefit of the sequential workflow, in fact enforced or hinted by the tool itself, while also getting rid of the recurrent weakness of email-based document storage. There were plenty of other things to dislike about VSS (like, pretty much the rest of them) but it wasn't so bad for maintaining documents.
bisrig
|
5 years ago
|
on: Robinhood now valued at $11.2B with new fund backing
Agree completely. I think Vanguard's UI is trying to send the message "if you're logging in you're doing it wrong".
bisrig
|
6 years ago
|
on: Highlights from FPGA 2020
The StateMover concept sounds pretty interesting and is almost like the reverse of the integrated-logic-analyzer approach that the major vendors have adopted in their tooling. I assume that in simulation land your debug environment is based on timing simulation, which, unless they've "fixed" the net name mangling, is not exactly pain-free in its own right.
bisrig
|
6 years ago
|
on: Highlights from FPGA 2020
I don't think that long lines scaled particularly well with increasing LUT counts and clock rates. All the black magic voodoo that goes into matching prop delay for resources like that tends to be applied to clock distribution. At least that's how it was a few years ago.
bisrig
|
6 years ago
|
on: Rules to run a software startup with minimum hassle
Something that I think is missing in the discussion here with regard to payment methods is: know your market segmentation. If you are targeting B2B (i.e. large businesses), there are going to be a lot of circumstances where credit card payments are a non-starter.
From personal (F500) experience, I know that I am going to have to move mountains in order for purchasing to accept a commercial arrangement with monthly credit card payments, which means I will usually move on to a competitive solution if one exists. In fact, one of the first questions I usually ask a vendor is "do you sell through (preferred reseller already listed as an approved vendor in our purchasing system)" as I know this is going to make my job of getting the purchase approved 100x easier.
So in conclusion: know your market segmentation and your potential customers' expectations for how they will do business with you.
bisrig
|
6 years ago
|
on: The FAA Proposal for Drone Remote ID
Am I reading this right that the standard remote ID broadcast is specified as "something in an unlicensed band, everything else about it you figure it out"? Isn't the point of this to be interoperable with other receiver systems for things like BVLOS operation? Seems like a funny place to throw in a shoulder shrug.
bisrig
|
6 years ago
|
on: How Radar Works
I know that this doesn't directly address the question you're asking, but to give an idea of the order of magnitude of the effect: the Doppler shift in frequency is approximately f_carrier * 2v/c. For anything moving at a "reasonable" speed, 2v/c is going to be very small (~2e-6 for Mach 1), and thus you would be talking about very minute differences between the transmitted and received pulse in terms of either overall pulse length or number of wavefronts received vs. sent (for what it's worth, my intuition is that the pulse length actually shortens for a closing target, but either way it's not measurable by the receiver).
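Plugging in some numbers makes the magnitude obvious. A quick sketch (the 10 GHz X-band carrier and Mach-1 target speed are illustrative values, not from the article):

```python
# Rough magnitude of the radar Doppler shift: delta_f ~ f_carrier * 2v/c.
C = 3.0e8          # speed of light, m/s
V = 343.0          # ~Mach 1 at sea level, m/s
F_CARRIER = 10e9   # hypothetical 10 GHz X-band carrier

ratio = 2 * V / C            # dimensionless, ~2.3e-6 for Mach 1
delta_f = F_CARRIER * ratio  # ~23 kHz shift on a 10 GHz carrier
print(f"2v/c = {ratio:.2e}, Doppler shift = {delta_f/1e3:.1f} kHz")
```

A ~23 kHz shift on a 10 GHz carrier is parts-per-million territory, which is why the per-pulse length change is unmeasurable even though the frequency shift itself is detectable with coherent processing.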
bisrig
|
6 years ago
|
on: Using SDRAM in FPGA Designs
As an aside: it looks like the timing diagrams in this article were created with a tool called WaveDrom. I've used this tool in the past and been impressed with what it's able to do in terms of creating nice timing diagrams for digital design documentation, a critical part of communicating how these designs (and interfaces!) are supposed to work.
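For anyone who hasn't seen it: WaveDrom diagrams are described in a small JSON dialect, where each character of a `wave` string is one clock tick. A minimal sketch (signal names made up, loosely SDRAM-flavored):

```json
{ "signal": [
  { "name": "clk", "wave": "p......" },
  { "name": "cmd", "wave": "x3x4x..", "data": ["ACT", "RD"] },
  { "name": "dq",  "wave": "z....5z", "data": ["D0"] }
]}
```

The text-based description is also a nice fit for version control, unlike hand-drawn timing diagrams in an image editor.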
bisrig
|
6 years ago
|
on: How to set up Xilinx Vivado for source control
I have been through a similar process myself a couple years back, and just wanted to say that this is a really good effort in trying to tame a beast that doesn't really want to be tamed. Lots of experimenting around with write_project_tcl and figuring out what settings are needed to coerce the tool into making version control/CI friendly decisions. Nice work!
bisrig
|
6 years ago
|
on: FPGAs Have the Wrong Abstraction for Computing
I think the heart of what the article is getting at is represented well by the following quote:
"To let GPUs blossom into the data-parallel accelerators they are today, people had to reframe the concept of what a GPU takes as input. We used to think of a GPU taking in an exotic, intensely domain specific description of a visual effect. We unlocked their true potential by realizing that GPUs execute programs."
Up until the late 2000s, there was a lot of wandering-in-the-wilderness going on with respect to multicore processing, especially for data-intensive applications like signal processing. What really made the GPU solution accelerate (no pun intended!) was the recognition and then real-world application (CUDA & OpenCL) of a programming paradigm that would best utilize the inherent capabilities of the architecture.
I have no idea if those languages have gotten any better in the last few years, but anything past a matrix-multiply unroll was some real "here be dragons" stuff. But: you could take these kernels and then add sufficient abstraction on top of them until they were actually usable by mere humans (in a BLAS flavor or even higher). And even better if you can add in the memory management abstraction as well.
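As a toy illustration of that layering: the "kernel" level is a naive triple loop (roughly what a first CUDA/OpenCL matrix multiply looks like before any unrolling or tiling), and a BLAS-style library call is the abstraction that hides all of it. Pure-Python sketch, nothing vendor-specific:

```python
# The raw "kernel" level: a naive O(n^3) matrix multiply. A tuned BLAS (or a
# GPU kernel after the "here be dragons" unrolling) does the same math with
# tiling, vectorization, and memory management hidden behind one call.
def matmul_kernel(a, b):
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):          # one "thread" per output element, in GPU terms
        for j in range(m):
            acc = 0.0
            for p in range(k):  # the inner dot product a tuned kernel unrolls
                acc += a[i][p] * b[p][j]
            c[i][j] = acc
    return c

print(matmul_kernel([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```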
Point being: we're still not there for FPGA computation, though there was some hope at one time that OpenCL would lead us down a path to decent heterogeneous computing. Until there are some real breakthroughs in this area, the only computation patterns that are going to map well using these techniques are the things we're already targeting to either CPUs or GPUs.
bisrig
|
7 years ago
|
on: Haversine Formula
bisrig
|
8 years ago
|
on: Amazon engineer will let strangers manage his $50,000 stock portfolio 'forever'
Day trading has a specific definition in this context (can't remember if it's SEC, FINRA, or some other acronym), and refers to buying and selling the same security within a single day. That $2.5k Fidelity account will go out of its way to tell you that you're not allowed to do that.
bisrig
|
9 years ago
|
on: AWS EC2 FPGA Hardware and Software Development Kit
I spent a few minutes poking around their project scripts, and it looks like the design file you upload back to EC2 is a post-routed design checkpoint, which is an interesting choice... I'll bet that they are doing partial reconfiguration with the I/O ring as the "static" layer.
bisrig
|
9 years ago
|
on: AWS EC2 FPGA Hardware and Software Development Kit
I think the parent comment to yours (or at least the edited version) is right, they are almost surely XCVU9Ps based on the logic element and DSP counts. The RAM numbers listed in the product table are embedded memories (BlockRAMs in Xilinx-speak); DDR4 wouldn't be a spec feature of the FPGA, as it's external to the part on the PCB.
Avnet web pricing is $27k for the -1 speed grade, extended (not industrial) with 12-week lead. Safe to say Amazon is getting a better deal on both counts.
I suspect the add-in card itself is either this or something similar to it, based on specs etc.
http://www.bittware.com/xilinx/product/xupp3r/. I didn't see pricing info readily available for that, but a similar card with less on-board DDR4 runs about $7k straight from Xilinx:
https://www.xilinx.com/products/boards-and-kits/ek-u1-vcu118...
Edit: Oh yeah, I forgot to add: I thought it was funny that Amazon's page referred to "logic elements" given that this was traditionally an Altera term - Xilinx preferred/prefers "logic cells".
bisrig
|
9 years ago
|
on: PYNQ – Python Productivity for Zynq
To add to this - a lot (maybe all?) of the functionality that is described by the concept of "overlays" maps to the design and implementation of the programmable logic in a coprocessor system. To use Xilinx terms, this would be the block diagram that describes the periphery that connects to either a soft- or dedicated-core processor. The block diagram will control the usage of programmable logic cores and provide hooks for the BSP generator and SDK to wrap these in ways useful for software development - memory maps & base addresses, associated drivers, etc.
The big value add that I see here is that the management of these overlays has traditionally been painful, especially when trying to swap overlays at runtime - this is always sold as one of the big benefits of SoC processor+FPGA, that you can do things like dynamic hardware accelerators based on what software is currently running, but more than some assembly has always been required. This seems like a nice step in the direction of having reasonable mechanisms to reconfigure based on application.
bisrig
|
9 years ago
|
on: Qualcomm’s NXP Deal Is a $47B Wager on Computers You Drive
I would be careful of trivializing the last point that you made. Things like extremely long design cycles, environmental requirements, expectations of value-add engineering services, product lifetimes, etc. make the auto market different in significant ways from consumer electronics. If you're a company like NXP, these are core competencies. If you are oriented towards the consumer market, these are annoyances at best and serious cultural challenges at worst.
bisrig
|
10 years ago
|
on: Virtual Planes, Virtual Airports: Inside the World of VATSIM
In addition to the procedural issues already addressed, let me point out that airspace systems are extremely complicated in terms of the types of users they serve, given the mix of commercial, GA and military traffic, and that adopting a new common standard applicable to all of these users is extremely problematic. As an example, the adoption period for ADS-B in the USA was specified as 10 years (we're about halfway through), and there is still a huge amount of resistance from the GA community due to costs of adoption etc.
bisrig
|
10 years ago
|
on: Virtual Planes, Virtual Airports: Inside the World of VATSIM
I think the author got a bit confused, as the next paragraph (quoting the VATSIM ATC) clarifies this statement... in the example given the ATC instruction requires readback. "Roger" is not what the controller is looking for in that case.