pdrayton's comments

pdrayton | 1 month ago | on: Reliable 25 Gigabit Ethernet via Thunderbolt

Oddly enough, that’s exactly what I’ve been benchmarking - different ways of linking Strix Halo machines - with respect to throughput & latency.

Posted a little bit re: the TB side of things on the Framework and Level1Techs forums but haven’t pulled everything together yet because the higher-speed Ethernet and Infiniband data is still being collected.

So far my observations re: TB are that, on Strix Halo specifically, while latency can be excellent there seem to be some limits on throughput. My tests cap out at ~11Gbps unidirectional (Tx|Rx) and ~22Gbps bidirectional (Tx+Rx). Which is weird, because the USB4 ports are advertised at 40Gbps bidirectional, the links report as 2x20Gbps, and they're stable with no errors/flapping - so it's not a cabling problem.
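For context, numbers like these come from iperf3-style runs. A minimal sketch of pulling the sum rates out of iperf3's --json output (the end.sum_sent / end.sum_received fields follow iperf3's JSON layout; the sample report below is illustrative, not a real measurement):

```python
import json

# Sketch: extract Tx/Rx throughput in Gbps from iperf3 --json output.
# "end.sum_sent" / "end.sum_received" are iperf3's JSON field names;
# the embedded sample report is illustrative only.
def gbps(report: dict) -> tuple[float, float]:
    end = report["end"]
    tx = end["sum_sent"]["bits_per_second"] / 1e9
    rx = end["sum_received"]["bits_per_second"] / 1e9
    return round(tx, 2), round(rx, 2)

sample = json.loads("""
{"end": {"sum_sent": {"bits_per_second": 1.1e10},
         "sum_received": {"bits_per_second": 1.1e10}}}
""")
```

In a real run you'd feed it the output of `iperf3 -c <peer> --json` instead of the canned sample.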

The issue seems rather specific to TB networking on Strix Halo using the USB4 links between machines.

That emphasis is to exclude the common exceptions: other platforms (eg Intel users getting well over 20Gbps); other mini PCs (eg MS-1 Max USB4v2); the local network (eg I've measured loopback at >100Gbps); and external storage, where folk are seeing 18Gbps+ - numbers that align with their devices.

End goal is to get hard data on all reasonably achievable link types. I already have data on TB & lower-speed Ethernet (switched & P2P), and am currently doing setup & tuning on some Mellanox cards to collect data for higher-speed Ethernet and IB. P2P-only for now; 100GbE switching is becoming mainstream, but IB switches are still rather nutty.

Happy to collaborate with any other folk interested in this topic. Reach out to (username at pm dot me).

pdrayton | 1 month ago | on: Reliable 25 Gigabit Ethernet via Thunderbolt

I'd heard similar complaints re: TB networking latency & jitter. Did some investigation and tuning on a pair of machines with USB4 ports connected via short TB5-rated cables, and eventually got the Thunderbolt links to consistently beat the Ethernet ones on both latency and jitter. And not just switched Ethernet either - even a direct P2P Ethernet link lost out to TB, though the difference there was small.
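For anyone reproducing this, it helps to pin down what "jitter" means. A minimal sketch, assuming jitter is measured as the mean absolute difference between consecutive RTT samples (one common definition - note that ping's mdev is a standard deviation instead); the sample RTTs are hypothetical:

```python
# Sketch: jitter as the mean absolute difference between consecutive
# RTT samples, in milliseconds. The sample values below are hypothetical,
# not real measurements from a TB link.
def jitter(rtts: list[float]) -> float:
    diffs = [abs(b - a) for a, b in zip(rtts, rtts[1:])]
    return sum(diffs) / len(diffs)

rtts = [0.42, 0.45, 0.41, 0.44, 0.43]  # hypothetical RTT samples, ms
```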

pdrayton | 9 years ago | on: Microsoft REST API Guidelines

One slight downside of a custom header to specify a version is that OPTIONS calls don't include the value of the custom header, so your pre-flight gets to say yes or no without knowing what version is being called. Putting API version in the URL or query string fixes this.
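A quick sketch of the URL/query-string alternative - the version rides in the request URL, which the server sees even on the preflight OPTIONS call (a custom version header would be absent from that call). Host and parameter names here are illustrative:

```python
from urllib.parse import urlencode, urlunsplit

# Version in the query string: part of the request URL, so visible to the
# server on every method including a CORS preflight OPTIONS. The host,
# path, and parameter names are illustrative.
params = urlencode({"api-version": "1.0", "productName": "cupcakes"})
url = urlunsplit(("https", "api.example.com", "/products", params, ""))
# url == "https://api.example.com/products?api-version=1.0&productName=cupcakes"
```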

As for bookmarking a GET request, this is /almost/ doable even following the MSFT guidelines, since they say that service implementors MUST also provide a query-string alternative to required custom headers (section 7.8), and that service implementors MUST support explicit versioning. The only fly in this ointment is that the versioning part of the spec offers only two places to specify the version - URL and query string - and seems to leave no room for other options.

Personally, I think the Accept-header flavor with custom MIME types is the most flexible for minor (backwards-compatible) versions - see GitHub's API for an example - but it certainly isn't the simplest to work with, whether in client libraries, curl/wget command-line use, or API consumer tools (almost none of which let you fiddle with Accept headers). Since ease of use is such a big factor in API adoption, passing versions in the URL or the query string is most likely an OK lowest common denominator for APIs that seek the widest possible reach.

pdrayton | 9 years ago | on: Microsoft REST API Guidelines

The problem with just exposing '?productName=cupcakes' is that you're assuming a single filter with simple equality. If you need to expose something even slightly more complex, e.g. "A=1 or B=2", or something other than equality like "A>1" or "B!=2", then you quickly find yourself re-implementing a little expression language, syntax for literals, etc. It's a slippery slope, which one can happily go down and succeed with a custom solution - until someone other than your own clients wants to pull data from it. Then they need to build their filters in your language, which is of course different from the next guy's language, and so on and so on.

The fix for this is OData. Not all of OData - just a little bit. It lets one standardize the filter expression syntax (as much of it as you choose to support) without making any requirements on the backend.
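To make that concrete, here's a minimal sketch of evaluating a tiny $filter-like subset (eq/ne/gt/lt comparisons joined by 'and', with property names, numbers, and single-quoted strings) against a plain dict - no backend binding at all. The grammar subset and field names are illustrative; a real implementation would also need 'or', grouping, and so on:

```python
import re

# Token = single-quoted string | number | word (property, operator, or 'and')
TOKEN = re.compile(r"\s*(?:'([^']*)'|(\d+(?:\.\d+)?)|(\w+))")

OPS = {
    "eq": lambda a, b: a == b,
    "ne": lambda a, b: a != b,
    "gt": lambda a, b: a > b,
    "lt": lambda a, b: a < b,
}

def tokenize(expr):
    tokens, pos = [], 0
    expr = expr.strip()
    while pos < len(expr):
        m = TOKEN.match(expr, pos)
        if not m:
            raise ValueError(f"bad token at {expr[pos:]!r}")
        s, n, w = m.groups()
        tokens.append(s if s is not None else float(n) if n else w)
        pos = m.end()
    return tokens

def matches(expr, record):
    # grammar: comparison ('and' comparison)*, evaluated left to right
    tokens = tokenize(expr)
    result, i = True, 0
    while i < len(tokens):
        prop, op, literal = tokens[i:i + 3]
        result = result and OPS[op](record[prop], literal)
        i += 3
        if i < len(tokens):
            if tokens[i] != "and":
                raise ValueError("expected 'and'")
            i += 1
    return result
```

The point being: the wire syntax is standardized, while the evaluation strategy (a dict lookup here, a translated backend query in a real service) stays entirely yours.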

I'm personally confident in making the claim that OData $filter doesn't require you to bind to a server implementation, because I work on a service built in Node.js and deployed on Linux in Azure that uses OData as a filter syntax and satisfies its data requirements from three completely different backend servers, none of which is a SQL Server (not that you mentioned SQL, but it's often cited as "all that OData is good for"). One of them is Elasticsearch, BTW :), the second is a proprietary aggregated metrics store, and the third is a cloud-scale columnar data store. All three can be queried with the same syntax, from tons of off-the-shelf OData consumers, and for clients the choice of which backend to pull data from is literally the only thing they change. From a business-value perspective this is pure win, due in large part to using just a little OData, at just the right spots.

I think of OData like salt in a recipe - a little bit is great; too much ruins the dish. Moderation in all things... :)

pdrayton | 9 years ago | on: Microsoft REST API Guidelines

OData is a big enough tent these days that there are good and (relatively) bad bits even inside OData. Fortunately you get to pick and choose which bits you use, so just avoid the bad ones.

The example cited as a bad URL makes use of OData functions and parameters, which is definitely a more esoteric part of the spec with spotty (if any) implementation amongst OData consumers - so discouraging this kind of API seems perfectly reasonable for a REST-centric guideline.

OTOH the OData query syntax is IMO a lot more reasonable. Outside of structured queries built as URI path fragments, if you want to provide a generic query mechanism on top of a store you need some kind of generic query language, and $filter is a reasonable one - easy to parse, easy to emit, and relatively easy to read. Yes, it has some gaps and a couple of bizarre edge cases, but they don't get in the way of mainline consumption scenarios. And it's hard to beat being able to provide a reasonable REST API that clients can construct queries for by hand, and that also "just works" when plugged into OData consumers (of which there are quite a few in the enterprise).
