stagehn's comments

stagehn | 5 years ago | on: When Math Gets Impossibly Hard

This is a pedantic response. There isn't an important difference between building a physical system and building a system in code whose generating process is the one you're trying to model. And everyone knew he didn't mean every outcome literally.

stagehn | 5 years ago | on: Lee Kuan Yew's Singapore

He didn't use the 'authoritarian' framing (he sarcastically questioned whether it was 'exemplary'); that's something you're attributing to him.

There was not 'zero' coverage of the guy's arrest; to the contrary, there was wide coverage, as a Google search easily reveals. There are also two factors you're not considering: (1) the guy ran a conspiracy theory group, which reduces sympathy for him; (2) the woman's arrest came first, so it was more novel and thus more engaging from a clicks perspective.

It's absurd that race is being brought into this as a relevant explanatory dimension. Pernicious and divisive to say the least, leaving aside the fact that there's no evidence race is in any way relevant to either what occurred in this case or how it was covered.

stagehn | 5 years ago | on: Lee Kuan Yew's Singapore

Is "wide support" of her arrest (as framed by the person I was replying to) an example of white lady in distress? That seems like the exact opposite of white lady in distress. Or is the single person who thought the arrest was unjust (further up) an example of white lady in distress?

None of this fits the white lady in distress definition. It just seems like a way to smuggle in anti-white racism for no particular reason. Nothing about this situation has to do with race in any way whatsoever.

stagehn | 5 years ago | on: Staff Report on Algorithmic Trading in U.S. Capital Markets [pdf]

> Consider that statement in limit; if the queue was of infinite length, there would be no trade-through, and no adverse selection.

E[PnL | is_filled_very_soon] and P(is_filled_very_soon) both decline monotonically in queue size, assuming we remain at the back of the queue. E[PnL | is_filled_very_soon] starts off negative and gets monotonically more negative; P(is_filled_very_soon) starts high and asymptotes to 0 (but never reaches it while the queue size remains finite).
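A toy Monte Carlo illustrates both monotonicities (all parameters here are made up for illustration, not calibrated to any real market): sells arrive more often when a hidden fair value sits below the bid, so deeper queue positions fill less often and, conditional on filling, with worse mark-outs.

```python
import random

def queue_stats(queue_ahead, horizon=20, trials=20000, seed=0):
    """Toy model of a one-lot limit buy resting behind `queue_ahead`
    units at the bid. Fair value follows a random walk; market sells
    arrive more often when fair is below the bid, so fills cluster
    with adverse moves. All parameters are illustrative guesses."""
    rng = random.Random(seed)
    fills, pnl_sum = 0, 0.0
    for _ in range(trials):
        fair = 0.0                # fair value relative to our bid price
        consumed = 0
        filled = False
        for _ in range(horizon):
            fair += rng.gauss(0, 0.3)
            p_sell = 0.7 if fair < 0 else 0.3   # sellers hit a rich bid
            if rng.random() < p_sell:
                consumed += 1
            if not filled and consumed > queue_ahead:
                filled = True     # our unit at the back finally trades
        if filled:
            fills += 1
            pnl_sum += fair       # end-of-horizon mark-out of the buy
    p_fill = fills / trials
    e_pnl = pnl_sum / fills if fills else float("nan")
    return p_fill, e_pnl
```

With these made-up parameters, moving from a couple of units ahead to a dozen cuts the fill probability and pushes the conditional mark-out further negative, matching both curves described above.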

I will elaborate on the mechanisms behind queues being over-sized in large tick names:

- I have estimation error in my theoretical fair value, combined with an opportunity cost of pulling my order, so even if I think my order is negative EV I want to leave it in the queue unless that negative EV exceeds the opportunity cost. In small-tick names I don't care about this estimation error because there's little to no opportunity cost to pulling an order, so I am much quicker to remove liquidity, which incentivizes a sparse book.

- Even if I get filled in a large tick name there's often resting size behind me that lets me scratch out, especially if in the market I'm trading I get the ack before the print shows in the public market data (in this case I am only competing with firms that have canary orders and I'm not sure how common that is outside futures arb).
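The first mechanism reduces to a one-line decision rule; this is a back-of-envelope sketch with hypothetical per-share numbers, nothing like a production model:

```python
def should_pull(p_fill, ev_if_filled, requeue_cost):
    """Pull a resting order only when its expected loss outweighs the
    opportunity cost of surrendering queue priority. Inputs are
    hypothetical per-share numbers for illustration."""
    return p_fill * ev_if_filled < -requeue_cost
```

With p_fill = 0.2 and an EV of -0.5 per share filled, the order stays when requeue_cost = 0.2 (large tick, priority valuable) and gets pulled when requeue_cost is near zero (small tick), which is exactly the sparse-book effect.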

I personally see the biggest problem of large ticks as the advantage they give prop firms that can pre-queue multiple levels in advance, especially with GTC orders. I see the biggest problem of small ticks as prop firms being able to dime genuine liquidity, which is a form of front-running, again a big advantage to them. Not sure exactly how this will net off in terms of rents collected.

> a large percentage of U.S. equities volume already trades at mid or within the spread, both principally and on venue. There are far bigger fish to fry in the market structure debate.

Agreed. Either way it's not very consequential. I'm 100% in favour of leaving it as-is in US equities though. I'm looking now at the order books of cheap stocks with 50 bps tick sizes and the spreads are multiple ticks wide, so reducing the tick would achieve nothing except added complexity.

stagehn | 5 years ago | on: Staff Report on Algorithmic Trading in U.S. Capital Markets [pdf]

What's the deadweight loss? Trading is zero-sum in dollars (gross of fees) and positive-sum in utility. If execution flow is paying more slippage in large-tick stocks then that PnL must be going into someone's pocket (a market maker, short-term scalper, or prop firm).

I don't agree that flow execution is going to be queueing much more than hitting in larger-tick stocks relative to smaller-tick stocks, especially in US equities, where the largest tick size is 100 bps and low-priced stocks have larger volatility. Most of these stocks have multi-tick spreads anyway, so reducing the tick size does nothing but increase the complexity of order management.

The bid-ask volume imbalance is often skewed in these stocks, and you get less slippage by just lifting the entire queue (when the BBO imbalance points in your direction) and bidding over with half of the average queue size, putting you at the front of the queue even if you have a low-information order you're trying to execute.

I can actually make a theoretical argument that large-tick stocks are better for flow execution, as follows: if the tick size is sufficiently large then I can be bid-over with size and not get dimed. Queue position is mine and can't be effectively stolen by market-maker algos.

I think it's mostly a myth that execution slippage is lower when queueing versus hitting for these large tick names. The average slippage from VWAP from randomly queueing in a stock with a 20 bps tick size will be about 10 bps, roughly the same as if you're just crossing the spread with a large order. But of course this slippage can be massively reduced in both the queueing and hitting execution strategies with some simple heuristics. You can get some market data and backtest this slippage yourself.
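For intuition on why queueing doesn't beat hitting on average, here is a toy backtest sketch (parameters are illustrative, not fit to any market data): the mid follows a symmetric one-tick random walk, a 'queue' order rests at the bid and gives up and crosses at the end if unfilled, and by optional stopping its expected shortfall is the same half-spread paid by crossing immediately.

```python
import random

def slippage(strategy, tick=0.002, steps=50, trials=20000, seed=1):
    """Toy implementation shortfall for a one-lot buy. 'cross' lifts the
    ask on arrival; 'queue' rests at the bid and, if still unfilled after
    `steps` one-tick mid moves, gives up and crosses at the final ask.
    Parameters are illustrative, not calibrated to any real market."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        arrival = mid = 1.0
        if strategy == "cross":
            total += (mid + tick / 2) - arrival   # pay half the spread
            continue
        bid = mid - tick / 2
        fill = None
        for _ in range(steps):
            mid += rng.choice((-tick, tick))
            if mid <= bid:                        # traded through our bid
                fill = bid
                break
        if fill is None:                          # unfilled: cross at the end
            fill = mid + tick / 2
        total += fill - arrival
    return total / trials
```

In this martingale toy both strategies cost half a tick in expectation (10 bps on a 20 bps tick, matching the rough figure above): the half-spread the queue earns when filled is paid back exactly by the runs where the price leaves without filling it, which is the adverse selection.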

stagehn | 5 years ago | on: Staff Report on Algorithmic Trading in U.S. Capital Markets [pdf]

As an industry practitioner I disagree with this, but I can see why first-order thinking would lead to that conclusion.

A smaller tick size does not necessarily translate into lower transaction costs for investors. Market makers' competition for queue priority ensures that queues are over-sized with liquidity when the tick size is larger, creating more adverse selection for market makers and lower transaction costs for investors. The mechanism is that queues remain deep even when the fair price breaches the BBO, because there remains a nontrivial probability that the fair price will bounce back to the other side of the BBO, returning the market maker's posted size to positive-EV territory. Market makers have an incentive to wear this when the tick size is large in order to preserve time priority.

This mechanism doesn't exist in small-tick stocks: there is no cost to pulling an order, because time priority is worthless in the limiting case of an infinitesimally small tick size.
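The trade-off can be made concrete with a two-outcome break-even calculation (all inputs hypothetical per-share amounts under this toy model):

```python
def breakeven_bounce_prob(loss_if_run_over, edge_if_bounce, requeue_cost):
    """Smallest probability of the fair price bouncing back above our bid
    that justifies staying in the queue after fair value breaches it.
    Solves p * edge - (1 - p) * loss = -requeue_cost for p, in a toy
    two-outcome model with hypothetical per-share inputs."""
    return (loss_if_run_over - requeue_cost) / (loss_if_run_over + edge_if_bounce)
```

For example, with a 0.6 loss when run over and 0.4 edge on a bounce, a free-to-requeue small-tick name needs a bounce probability of at least 0.6 to justify staying, while a requeue cost of 0.2 drops the threshold to 0.4; that gap is why large-tick queues stay deep after the fair price moves through them.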

My experience has basically confirmed this theory. I have run market making algorithms in both small- and large-tick stocks and found both categories to be equally difficult. In fact it's hard to imagine it any other way: market makers will compete away any difference in edge, reducing the size of the pie to near zero in both tick-size categories.

My findings here only apply to tick sizes below 100 bps.

I do however agree that a large tick helps to create a speed arms race. But I see that as offset by the greater simplicity of larger tick sizes: order management is much easier.
