They seem to deal pretty well with confusing sentences containing typos, bad grammar, or semi-nonsense. A “large language model cat made by microsoft” doesn’t mean anything, but “large language model chat…” does, especially since Microsoft already tried this with Tay, and that’ll turn up in its training data. Maybe they have retrained it lately (I guess you could tell by asking it, in a completely new chat, whether Microsoft has a chatbot and what it’s called), but I still think it could absolutely make the correct guess/association here from what you gave it. I’m actually really impressed by how these models infer meaning from non-literal sentences, like one exchange with Bing where the user only said “that tripped the filters, try again” and Bing knew that meant to replace the swear words.
neom|3 years ago