top | item 41498042

mathteddybear | 1 year ago

No, project Bernanke wasn't about that.

At that time, the exchange was a second-price auction, and all parties could submit up to two bids (presumably, the top two bids from their own collection of advertisers). Let's call the Google bids G1 > G2.

Since Google had already implemented automated bidding strategies, it would submit (1+a)G1 and (1+b)G2 to this auction, for certain small fixed parameters a and b. Project Bernanke computed the optimal values of these a, b parameters from historical data.
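To make those mechanics concrete, here is a toy sketch of the setup the comment describes: a second-price auction, two in-house bids scaled by fixed factors, and a grid search for a and b over historical data. Everything specific here is invented for illustration (the numbers, the parameter grid, and the surplus objective); the comment doesn't say what Bernanke actually optimized.

```python
def second_price_auction(bids):
    """Return (winner_index, price_paid): highest bid wins, pays second-highest."""
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return ranked[0], bids[ranked[1]]

def google_bids(g1, g2, a, b):
    """Scale the two in-house bids by fixed factors, as described above."""
    return (1 + a) * g1, (1 + b) * g2

# Toy historical data (made up): (G1, G2, best rival bid) per auction.
history = [(10.0, 6.0, 7.0), (8.0, 5.0, 9.0), (12.0, 4.0, 3.0)]

def surplus(a, b):
    """Hypothetical objective: value won minus second price paid, summed
    over auctions where the scaled top bid wins."""
    total = 0.0
    for g1, g2, rival in history:
        b1, b2 = google_bids(g1, g2, a, b)
        winner, price = second_price_auction([b1, b2, rival])
        if winner == 0:            # our top bid won
            total += g1 - price    # value captured minus clearing price
    return total

# Grid-search small fixed a, b over the historical data.
grid = [i / 100 for i in range(0, 21, 5)]   # 0%, 5%, ..., 20%
best = max(((a, b) for a in grid for b in grid), key=lambda ab: surplus(*ab))
```

Note the effect of the second bid: when the top bid wins and (1+b)G2 is the runner-up, the scaled second bid is what sets the clearing price, which is why the choice of b matters at all in a second-price format.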

Cue the government's discovery process misunderstanding the documentation.


bbor|1 year ago

Hmm, you clearly know what you’re talking about and this contradicts the available info, so thanks for sharing! I’m a little confused though. To bring it back to simple terms: you’re saying that the project was simply to start bidding more on ad space…? How exactly does that help (/“bailout”) their customers, and why would that be its own project? “Determining how much to bid for ad space” is already the job of half of Bayview campus, so I’m confused by this benign explanation.

At the very least it sounds like they were using their position as auctioneer to fine-tune their bidding strategies, which seems like a textbook example of monopolistic behavior. But even that would be a step up from what I/the article above accuse them of.

mathteddybear|1 year ago

The values of G1 and G2 are computed by a complex algorithm; however, that algorithm is agnostic of the ad's position in the auction, unlike the constant factors (1+a) and (1+b) applied on top of it.

Other companies in that auction could apply this kind of optimization too. Perhaps the improvement is not as large for smaller participants, and so not worth pursuing.