This problem is known as "attribution" - you have a "no" or "without" in the sentence, but you don't know where it belongs. One could (and one does) argue that the problem cannot be solved with statistical methods (ML), especially not in any domain where accuracy is required, such as medical record analysis: "no evidence of cancer" and "evidence of no cancer" are very different things.
Zooming out, the language field breaks into several subfields:
- A large group of Chomsky followers in academia, all about logical rules but with very little in the way of algorithmic applicability, or even interest in it.
- A large and well-funded group of ML practitioners, with a lot of algorithmic applicability but an arguably very shallow model of language, which fails in cases like attribution. Neural networks might yet show improvement, but apparently haven't in this case.
- A small and poorly funded group of "comp ling" researchers, attempting to create formalisms (e.g. HPSG) that are still machine-verifiable, and even generative. My girlfriend is doing a PhD in this area, in particular modeling WH questions, so I get some glimpse into it; it's a pity the field is not seeing more interest (and funding).
This is pointless overcomplication. I might agree if the example were slightly more interesting, but "without stripes" isn't even "absence of <stripes>"; it is essentially a colour/pattern and can be attributed to a range of things exactly the same way "green" can be. Google Translate correctly associates much more dubious and abstract concepts than that, and does it with statistical methods, i.e. associating word combinations with a location in a vector space. The fact that all major search engines fail to do it here is just shameful. Especially Amazon, where it is pretty much the primary search function.
The point that the author is making, in a very understated way, is that all three companies have PR websites that breathlessly describe their advanced AI capabilities, yet they cannot understand a very simple query that young children can.
Google has for years put out puff pieces talking about high accuracy on image tagging. It’s only within the last few months that searching my Photos library for “cat” returned something other than pictures of my dog.
There’s a nuanced argument that practitioners know how dependent ML is on training data and how sharply accuracy tails off, but that nuance tends to be removed from anything selling to potential customers — which has not been a great way to keep them, in my experience.
I have noticed in the past few years that Google results have become noticeably worse for similar reasons. Google used to _surprise_ me with how well it understood what I was really looking for, even when I put in vague terms. I remember being shocked on several occasions when putting in half-remembered sentences, lyrics, or expressions from something I had heard years ago and it being the first! result. I almost never have this experience anymore. Instead it seems to almost always return the "dumb" result, i.e. the thing I was not looking for, even when I try to avoid clever search terms. It's almost like it is only doing basic word matching or something now. Also, usually the first page is all blogspam SEO garbage now.
Google was good at launch because it was harvesting data from webrings and directories to provide it "high quality" link ranking data. However, they didn't thank or credit or share any of their revenue with the sites whose human curation helped their results become so impressive. Seeing that Google search was effective, most human curators stopped curating directories and webrings. The SEO industry picked up the slack and began curating "blogs" that are junk links to junk products. This pair of outcomes led to the gradual and ongoing decay of Google's result quality.
Google has not yet discovered how to automate "is this a quality link?" evaluation, since they can't tell the difference between "an amateur who's put in 20 years and just writes haphazardly" and "an SEO professional who uses Markov-generated text to juice links". They have started to select "human-curated" sources of knowledge to promote above search results, which has resulted in various incidents, e.g. a political party's search results showing a parody image. They simply cannot evaluate trust without the data they initially harvested to make their billions, and without curation their algorithm will continue to fail.
Your search for "skiing Norway" mostly returns results for skiing in the French Alps, because those pages have much higher visit rates.
Google is a dumbass nowadays, and regularly ignores half your search terms to present you with absolutely irrelevant results that have gotten lots of visits in the past.
I have also found that search results are getting frustratingly worse. Often, even when I put in explicit search terms and quotes and filter out words that I don't want, Google will return results that don't adhere to what I am looking for, or just return no results at all. I remember when I would search for something and find much more relevant information. Now the first 5 or 6 search results are ad-sponsored and aren't relevant, and I have to go to the 3rd or 4th page to find something that matches. I also often have to search for things that were posted in the last year or less, because the older postings are increasingly irrelevant.
Searching "men without pants" versus "men with pants" gives much better results.
This is a case where, while it makes sense to say the sentence, it's not a common use of language. At the end of the day, the search engine will find what's written down; it's not a natural language processor yet (despite any marketing).
Shirt stores don't advertise "Shirts without stripes - 20% off", they describe them as "Solid shirts" or "Plain shirts". Men's fashion blogs talk about picking "solid shirts" or "plain shirts" for a particular look. If I walked into a clothing store and asked for "shirts without stripes", the sales person would most likely laugh and say "er, you mean you want plain shirts?".
Plain shirts/solid shirts are the most common way to refer to these, and people seem to be searching this way:
Regarding moving towards natural language processing - the "without" part is not as important as knowing the context.
My kids will ask me to get from the bakery things like "the round bread with a hole and seeds", which I know means "sesame bagel", or "the sticky bread", which means "cinnamon twists" - which I understand because I know the context. Sometimes they say "I want the red thingy", and I need to ask a bunch of questions to eventually get at what they want (sometimes it's a red sweater, sometimes it's cranberry juice).
Unless Google starts asking questions back, I don't think there is any way it can give you what you want right away.
Vaguely similar to a joke from Ninotchka that Zizek often uses about the difference between 'coffee without cream' and 'coffee without milk'. He usually uses it to reference the concept of negation in the Hegelian dialectic, but he's also mentioned the difficulty of computers understanding negation in the context of the coffee/cream example.
Why should it not be possible to solve this with statistical methods? The model just needs to understand the meaning of "no" here, in the context of the whole sentence. I would guess that most modern NNs from the NLP area (Transformer or LSTM) would be able to correctly differentiate the meaning. The problem is, I think there is no fancy NN (yet) behind Google search or the other web search engines.
To extend on that, you can think of the human brain as just another (powerful) statistical model.
"there is no evidence of cancer" and "there is evidence of no cancer" are two different statements with different meanings, so it's a more complex task than just understanding the importance of "no" in a sentence. It involves semantic analysis of the sentence. The paper I linked to below describes a technique they call "deep parsing." Check it out for more context.
> I think there is no fancy NN (yet) behind Google search,
During the deep learning boom, Google made a huge push towards NN-based NLP. SEOs and their PR call these efforts, collectively, RankBrain: https://en.wikipedia.org/wiki/RankBrain
I think we are on the cusp of combining symbolic/logical operations with the vectors produced by neural networks (or at least, a major effort there). It could be done by neatly tying up all these different NN-based NLP modules (parsing, semantic distance, knowledge bases, ...) with another set of decision layers stacked on top.
This very question was the subject of a lengthy debate between Norvig and Chomsky. I won't rehash it here, but here's a glimpse:
Chomsky: Statistical analysis of snowflakes falling outside the window may predict the next snowflake, but it will do very little for weather prediction, and nothing for climate analysis.
Norvig: Give us enough data and we will get close enough for all practical purposes.
- Shirt Without Stripes: shirts where the description contains both "without" and "stripes". Example: a shirt without a collar, with stripes.
- "Shirt Without Stripes": a mess, with and without stripes, suggesting an unusual search query. In fact, the linked article site is the first result in web search.
- Stripeless shirt: sexy women in strapless shirts
- "stripeless shirt": pictures of Invader Zim...
- "stripeless" shirt: mostly shirts without stripes, but there are some shirts with stripes that are described as stripeless...
The last one may give us a hint at the problem. If you have to mention that a shirt is without stripes, you are probably comparing it to a shirt with stripes. For example, imagine a forum: some guy posts a picture of a shirt with stripes, and I can expect some people to ask questions like "do they sell this shirt without stripes?" Or maybe the seller himself has something like "shirt without stripes available here (link)" in the description. So the search engines tie "shirt without stripes" to pictures of shirts with stripes.
I remember an incident where searching for "jew" on Google led to antisemitic websites. The reason was simply that that exact word was rarely used in other contexts. Mainstream and Jewish sources tend to use the words "jews" and "jewish" but not "jew". And because Google doesn't look at the dictionary meanings of words, but rather at what people use them for, you get issues like that.
> The reason was simply that that exact word was rarely used in other contexts
I had a similar problem when I was trying to convince a friend that homeopathy was a complete and utter fraud with absolutely no basis in reality. She was convinced that the internet's overwhelming consensus was that homeopathy was valuable and regular doctors were control-freaks who make things up when they don't know the answers.
To prove her point, she did an internet search for allopathic medicine and showed me how the majority of the results were negative.
To me, the most interesting implication here is that this must not adversely affect Google's ad revenue. If it did, they would surely fix it. This, in turn, means that apparently we have been trained to interface with search engines such that this is not a problem.
Sometimes I wonder how much my brain has changed to use search engines / how much of it is dedicated to effective googling. Makes me feel like a cyborg.
That sounds like an ad-business version of the efficient market theory. E.g., there can't possibly be a hundred-dollar bill on the ground, because if there were, someone would surely have picked it up by now.
I think you're overestimating Google's sophistication.
The funny thing is that this search is pretty easy to put into first order logic (shirt(x) & ~striped(x)). I guess we now have computers that are bad at logic.
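With structured product data, that formula is almost directly executable. A minimal Python sketch over a hypothetical toy catalog (all names and fields invented for illustration):

```python
# Toy catalog (hypothetical data) for evaluating shirt(x) & ~striped(x).
catalog = [
    {"name": "oxford shirt",  "type": "shirt", "pattern": "solid"},
    {"name": "breton shirt",  "type": "shirt", "pattern": "striped"},
    {"name": "striped scarf", "type": "scarf", "pattern": "striped"},
]

def shirt(x):
    return x["type"] == "shirt"

def striped(x):
    return x["pattern"] == "striped"

# The first-order query shirt(x) & ~striped(x) as a list comprehension.
results = [x["name"] for x in catalog if shirt(x) and not striped(x)]
print(results)  # -> ['oxford shirt']
```

The hard part, of course, isn't the logic; it's getting from free-text product descriptions to a reliable `pattern` field in the first place.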
This is something that has annoyed me since the Altavista times. I want to search for "madonna but not the singer", and find pictures of the holy icon. I can do "madonna -singer", but that fails if the page mentions the word "singer" a single time. Even if it is "This is a page about madonna statues, but not about the famous singer."
It would be great if I could add negative keywords to a website, or mark text as "don't index" or "index with a negative weight". But probably, people would game this in ways I can't imagine.
There is probably a clever ML solution for this, like having meaning-vectors for distinct ideas, and pushing pages that are close to one meaning away from the other. Classification is easy if you have keywords like "painting" and "catholic", but if it is "virgin" or "prayer" then it could be either meaning, so there is never a bullet-proof solution.
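A minimal sketch of that meaning-vector idea, with made-up 3-d vectors standing in for real embeddings (all numbers and labels here are invented for illustration):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical 3-d "meaning vectors": dims = (art, religion, music).
centroids = {
    "madonna_statue": [0.9, 0.8, 0.1],
    "madonna_singer": [0.2, 0.1, 0.9],
}

# A page about religious statuary that happens to mention the singer once.
page = [0.7, 0.9, 0.2]

best = max(centroids, key=lambda k: cosine(page, centroids[k]))
print(best)  # -> madonna_statue
```

With vectors like these, one stray mention of "singer" just nudges the page vector slightly; it doesn't flip the classification the way a hard `-singer` exclusion does.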
I have a PhD in NLP (which is what we often call it on the CS side, but is almost synonymous with CL="computational linguistics" on the cognitive/linguistics side of the field). I remember a talk at our annual conference, well-attended, perhaps around 2003 or so. The speaker was from one of the labs that was really leaning into "big data", which was only just becoming possible at that point, and argued persuasively that we should all just throw out our parsers and formalisms—ditch the computational linguistics side, basically—because we were on the edge of functionally infinite (unsupervised) data, and supervised and partially supervised systems would never ever be able to keep up. He presented performance numbers showing how the unsupervised systems needed a lot more data to compete with the supervised systems, but that data was available, and as he threw more and more and more data at the system it got better and better. (I no longer remember the specific task he was using to illustrate his point.)
There were gasps in the room and a kind of depressed acquiescence: geez, he might be right. And the pendulum indeed swung in that direction, hard, and the field has been overwhelmingly dominated by the statistical machine learning folks on the CS side of the field, while the linguists kind of quietly keep the flame alive in their corner.
But I thought then, and I still think now, that it really just was another swing of the pendulum (which has gone back and forth a few times since the birth of the field in the 1960s). Perhaps it's now time again for someone to ring up the linguists and let them apply their expertise again?
If you select "I don't like this recommendation" for a video on youtube, you will get to provide feedback on why you did so: either "I don't like this video" or "I've already watched this video." I've pressed the latter on literally thousands of videos at this point, and after well over a year of this, YouTube still hasn't figured out that I don't want to be recommended videos that I've already watched.
Likewise, Google says I should log into their website for personalized search results, but after years of always clicking on Python 3 results over Python 2.7 results, it never learned to show me the correct result.
Eventually I realized that personalized recommendations are more or less just a thin cover for collecting vast amounts of data with no benefit to the consumer. I believe we have the technology to do better, but we don't use it. In fact, we seem to be using it less and less.
My experience has been that most ads that "follow me around" peddle the exact same thing I purchased most recently. No, I will not buy a second electric kettle again, let alone the exact make and model that I now own. I'd rather have generic ads, so I might discover some new product that I could (at least in theory) actually buy.
I love this. It is such an easy-to-grasp example of what is "wrong" with search. Historically, searching was keyword-based, so documents with "shirt" and "stripes" would rank highly, even though none of those pages had the keyword "without".
As humans we know immediately that the search is for documents about shirts where stripes are not present. But the term 'without' doesn't make it through to the term-compositor step, which feeds terms in a binary relationship. We might express such a relationship as
Q = "shirt" AND NOT "stripes"
You could onebox it (the Google term for a search short-circuit path that recognizes the query pattern and takes some specific action; for example, calculations are a onebox), and then you get a box of shirts with no stripes and a bunch of query results with stripes.
You can n-gram it, ranking the without-stripes n-gram higher than the individual terms, but that doesn't help all that much because English-language documents don't call them "shirts without stripes"; generally they are referred to as "plain shirts" or "solid shirts" (plain-shirt(s) and solid-shirt(s) respectively). But you might do okay punning without-stripes => plain or solid.
From a query perspective you get better accuracy with the query "shirts -stripes". This algorithmic query uses unary minus to indicate a term that should not be on the document but it isn't very friendly to non-engineer searchers.
Finally you can build a punning database, which is often done with misspellings like "britney spears" (ok so I'm dating my tenure with that :-)) which takes construction terms like "without", "with", "except", "exactly" and creates an algorithmic query that is most like the original by simple substitution. This would map "<term> without <term>" => "<term> -<term>". The risk there is that "doctors without borders" might not return the organization on the first page (compare results from "doctors without borders" and "doctors -borders", ouch!)
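A minimal sketch of such a substitution rule (hypothetical, regex-based, far simpler than a real punning database) shows both the win and the "doctors without borders" failure in a few lines:

```python
import re

def rewrite(query):
    # Naive punning rule: "<term> without <term>" -> "<term> -<term>".
    return re.sub(r"\bwithout\s+(\w+)", r"-\1", query)

print(rewrite("shirt without stripes"))    # -> shirt -stripes
print(rewrite("doctors without borders"))  # -> doctors -borders  (ouch!)
```

A real system would presumably gate this rewrite on query and phrase statistics, so that frequent named entities like the organization survive the substitution intact.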
When people get sucked into search it is this kind of problem that they spend a lot of time and debate on :-)
Perhaps, but would you really say "Hi, I'm wearing a shirt without stripes"?
It's a completely artificial construct. Simply the fact that this Hacker News entry is the #1 search result shows that real human people do not perform this search in significant quantity. But we can quantify that with data to back up the assumption [1][2]. When people want to buy a shirt without stripes, they do not describe the shirt by what it doesn't have.
In fact, it's trivial to cherry pick a random selection of words that on the face of it sounds like something a human might search for, but it turns out never occurs in practice. Add to that the fact that the term is being searched without quotes [3], which results in the negation not actually being attached to anything.
Do you go to a store to buy it along with your Pants Without Suspenders, Socks Without Animal Print, and other items defined purely by what they don't have?
Is it just me or does it feel like in the last couple years all of these companies have had the quality of their search go down? I've noticed large portions of my search will go ignored and it will just grab the most popular terms in my search rather than searching all terms.
This is also confusing what you search for vs. what the vendor thinks you will buy. Product catalog searches often intentionally return items outside your search parameters.
I would never search for something this way. If I wanted to find a 4WD car, I wouldn't search for "cars without 2WD."
Likewise, here, I would search for solid-colored shirts.
And these services are limited to the content/terminology utilized by the cataloged sites/products.
If I am selling a "black shirt" or a "solid black shirt," it is not google's job to catalog it as a "shirt without stripes," unless I advertise it as a "black shirt without stripes."
I would use natural language to test a service's NLP ability.
But what if you just hate shirts with stripes but do like polka dots or other patterns? You can do a fancy advanced search query with OR and EXCLUDE tricks, but that is not what this post is trying to emphasize.
We're a company coming out of the YC W20 batch working on the product attribution problem http://glisten.ai/.
There are too many products nowadays to be manually attributed (e.g. pattern=stripes), making it hard to return good results even with entity resolution for queries. We train classifiers to categorize products, including what something is not, using their images and descriptions.
Google Photo's search is a similar source of amusement for me. While it's quite good, it also fails fairly regularly and sometimes amusingly. For me "turtle" includes understandable mistakes like fish, a snail, and a rock that does look a bit like a turtle. However "turtle" also includes this, a picture of sequined slippers reflecting light?! https://i.imgur.com/4aSlA4B.jpg
I'm guessing one of those reflections looks like a turtle? Or maybe a pattern on the floor, wall, or rug?
Although there are examples where I'm unsure if the AI is dumber than my 4yo or smarter than me. This is a result for "truck": https://i.imgur.com/JcgXZAG.jpg
Even (especially?) my 4yo knows those are Brio trains, not trucks. However, trains have components called trucks! https://en.wikipedia.org/wiki/Steam_locomotive_components I'm unsure whether or not any of the wheel assemblies on these toy trains are considered trucks, so either the AI is extremely smart or slightly dumber than a 4yo.
Although search within apps can be even worse.
I was looking through Google Movies the other day for the film 2001, and instead it swamps the results with films from the year 2001. One could argue that there are lots of people who are massively keen to buy films based on the year of production, but I suspect it's better to satisfy those looking for years in the title first, and then after that brief interruption list the year-based results, rather than the other way around.
Similarly, looking for "The Book of Why" on Audible is dismal: even when it's in quotes, it isn't until the 42nd result that the exact match shows up, with a load of useless, not-obviously-connected results turning up first.
Both these failures interest me as they have a tangible financial implication (I clearly had money to spend) and yet they remain unfixed.
Proof that SELECT with GROUP BY doesn't work if your tags aren't correct.
Joking aside, it doesn't surprise me that this isn't being picked up — aren't most of these AI teams more R&D than actual public-facing? Maybe I'm just cynical though.
This contrasts with my query of "guys in jean jumpers singing too ra loo ra loo" a few years back, which Google correctly identified as "Come on Eileen" by Dexys Midnight Runners. To this day my favourite search experience.
If it were butter, you'd want an unstriped shirt. If it were provolone, you'd want a non-striped shirt. But because it's neither of those, I think you just want a "shirt" or maybe, a "plain shirt". Indeed, I get much better results with either of the latter two search terms. There's no need to mention stripes at all, since no pattern is the default state, isn't it?
Because Google does understand that "no" and "without" are interchangeable. But, understandably, it does not correlate "shirt without stripes" with "solid-colored shirts." Why? Because no one advertises or describes a solid-colored shirt as a shirt without stripes, and no one searches that way. It's an irrelevant point, in my opinion.
Only slightly related, but a couple of years back I got an Alexa as a gift. When you open the Alexa app, they had the option to add a list of todos as a reminder. The first thing I did was say something like - Alexa, add a reminder to get milk and eggs and paper. The app literally added a single item like this - milkANDeggsANDpaper.
Every once in a while I try the voice recognition by trying to speak normally to it. Normally, as is saying things like: "please set a reminder for five... umm... no I mean 6 o'clock".
Normal humans do this all the time, and if I can't do it speaking to it becomes incredibly frustrating to the point that I never want to do it again. I don't want to plan ahead what to say before I say it.
Granted, it's been a couple of years since I last tried so maybe they're better now.
Larry: "I saw a red Lamborghini in the parking lot!"
Most people will assume Lisa is driving a red Lamborghini and is back from vacation; meanwhile, all the bots are searching for Lamborghini vacations and trying to figure out what's going on in the conversation.
So basically the AI doesn't convert "without x" to "-x" even though the basic capability needed is there. This is why AI is a hard problem, especially when it meets the real world.
It's 2020 and we're still quibbling about the terminology used in SQL, what did we expect?
It's not enough to say "Oh, we should add a rule that 'without' means negate the next word" because that only applies to this one situation, in this one language. Let's generalize the problem: We aren't correctly translating from English (or other spoken languages) to Computer/Logic.
The state of the art in machine translation (from what I've read at least) is translating from language-A to a language-less "concept space" and then from there to language-B. Could that be done where the output language is something a search engine can use to find what you want correctly?
Given that pattern, I suspect we could see much better results in cases like this.
I think that this is actually really encouraging in showing that we still have a ways to go in improving search engines. A lot of people treat search engines as a solved problem, at least for non-question answering aspects.
Google has gotten significantly _less_ useful for finding technical information over the last decade or so. It used to be the case that when searching for some tech-related item (say, how to use functions in bash), the results would take you to someone's personal website or a wiki. Ostensibly, the more people linked to it and the more people clicked on it in the results, the better it ranked for that query and similar queries.
Today, entering in any tech-related query at all takes you to StackOverflow, end of story. Not only are SO answers quite often outdated (or even terrible advice in general), most of the time I'm not looking for a "here's how you do X", I'm looking for background information on a topic.
Most non-tech queries I put into google are even _more_ useless as the results tend to fall into these categories:
* Wikipedia (okay for _very_ general things, useless for domain-specific knowledge)
* SEO-enhanced blogspam, (a.k.a. "8 Weird Ways to Earn Millions Through Gaming The System!")
* Tweets on twitter (!)
The dev/tech industry desperately needs a search engine that somehow prioritizes _quality_ content, not one-off answers, blogspam, and tweets.
The problem here is not negation; it's that no product is described as "shirt without stripes". "Stripes" and "shirt" come together in a different sense, and since Google cannot find the whole phrase, it has to find the parts. For example, check "shirt without shoulders".
Humans can kind of make some assumptions based on context, but it's really just a poorly defined, vague query.
What if you walked into a store and asked an associate for a shirt without stripes? What would you get?
Probably some further questions for clarification. What about checked shirts? Floral prints? Plaid? Do you want no pattern at all? T-shirt? Polo shirt? Dress shirt?
Granted, the AI results are particularly bad because they give you the one thing that you specifically didn't ask for, but that's also the only information you provided. Defining a query in terms of what you don't instead of what you do isn't going to go well.
What if you went to google and said "Show me all the webpages that aren't about elephants"? Sure, you'd get something, but would it be anything useful?
This is a good example of the bar HNers must have these days when they bafflingly assert that Google is somehow getting worse from what they remember.
Google has gotten better, it's just HNer expectations that have changed as they expect more and more magic.
For example, the subtitle on the repo is "Stupid AI" when this query has never worked in these search engines, and it won't anytime soon.
You'd think the technical HN crowd would be more advanced than to make the same mistakes that (they complain that) stakeholders/users/gamers make when they mistakenly think everything is much easier than it actually is. Things aren't "stupid" just because they can't yet read your mind.
That darn conceptual search sure is hard :) The technical approach to achieving this involves a sentence embedding, then a vector search that matches documents based on a distance metric like cosine similarity. If you encode a description of a shirt in an embedding trained on all shopping-item descriptions, it should match up with the search query. The trick is getting the sentence embedding of a short query to match a longer document description - long stretches of text in embeddings tend to average too much and cloud meaning. The other problem is including the vector-search feature without screwing up other searches.
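A toy sketch of that matching step, with a hypothetical hand-made word-vector table standing in for a trained embedding (every number here is invented for illustration):

```python
import math

# Tiny hypothetical word-embedding table (2-d for illustration).
vecs = {
    "shirt":   [1.0, 0.0],
    "plain":   [0.9, 0.4],
    "solid":   [0.8, 0.5],
    "striped": [0.1, 1.0],
}

def embed(words):
    # Mean-pool word vectors into a crude sentence embedding.
    known = [vecs[w] for w in words if w in vecs]
    return [sum(d) / len(known) for d in zip(*known)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

docs = {
    "plain solid shirt": ["plain", "solid", "shirt"],
    "striped shirt":     ["striped", "shirt"],
}
# Ideally the query understander maps "shirt without stripes" to
# something near "plain shirt" before embedding it.
query = embed(["shirt", "plain"])
ranked = sorted(docs, key=lambda d: cosine(query, embed(docs[d])), reverse=True)
print(ranked[0])  # -> plain solid shirt
```

Mean-pooling also illustrates the averaging problem mentioned above: the longer the description, the more its vector drifts toward a generic centroid.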
On a meta note, I am a bit tired of HN submissions being used more as "Writing Prompts" rather than as links to substantive material.
This thread is an excellent example. The author of the linked page didn't have the decency to actually make a substantive point, instead sharing three screenshots and posting the link here, chumming the HN waters with the kind of stuff that brings in the sharks from far and wide.
Bashing on big cos: Check
Vague pronouncements about AI: Check
Generic side-swipes about 'ad revenue': Check
This is why a coherent thesis is required to even initiate a proper discussion, because in the absence of that it invariably devolves to lowest-common-denominator shit-flinging.
Here's another fun fact about how commerce search engines work (I spent a couple of years on this):
Negations sidestep almost all of the algorithms that try to provide an improved result set, and fall through to pure text relevancy. So try searching on Amazon for shirt, then search for: shirt -xkxkxkxk. Since xkxkxkxk doesn't match any documents, the negation should have no effect, but it does: it sidesteps all the fancy relevancy work and hardcoded query rewrite rules, domcat rules, demand and sales/impression statistics, etc., and gives you basically awful search results. You don't even get shirts.
I'm actually not sure I expect this much from a search engine. Typically there is going to be a useful word to describe what you want without having to hope it can understand "no" or "without" (for example, without stripes -> "solid" or even "NOT striped" in many cases).
Anyone with a programming background knows there is an art to forming useful search queries--it is an acquired skill. I'd personally much rather the engine bring back predictable results given mundane rules and keywords than attempt to understand sentences using an opaque method of understanding.
Since no context is provided, I do not expect it to understand the prepositions itself.
Given the exact query, a human creates an environment, and thus context, themselves.
It may also depend on whom you are asking. For example, I entered this site to find news about software & tech.
Also, since 'Stripe' is a company name, I assumed the link would give a list of shirt shops that do not accept Stripe as a payment method/provider (thus some kind of protest-related thing).
I literally thought about that yesterday and did not see the page thinking "That's too much for tonight".
To be fair, the only thing that Google needs to do internally is to match this query to “shirt -stripe” and then you’ll get the necessary answer. The bigger question is why they are not doing that.
"Plain shirt" works a charm though. What is a 'shirt without stripes' anyway? That could be a shirt with diamonds? Or a plain one? Or a Hawaiian shirt?
A good demonstration of the linguistic fact that far from being meaningless, prepositions (adpositions, more generally) are actually highly consequential for meaning and are highly ambiguous between different meanings. Here's a paper that'll give you a good appreciation of this from an NLP perspective if you're curious: https://www.aclweb.org/anthology/W16-1712.pdf
I believe the future of AI, as showcased by this simple use case, is not one central AI such as Google's search engine recognizing the context, but rather each of us having a "smart assistant" with a personalized, trained understanding of the contexts that we mean.
And it's only that smart assistant that automates coping with the deficiencies of a one-size-fits-all central solution, finding me shirts with no stripes by using a rather dumb search engine. (Or "a pizza I would like", etc.)
I'm kinda late to this conversation, but there are companies and engineers trying to solve this problem, basically adding more "semantics" to visual content. A good place to start is this blog post from Pinterest.
As others have pointed out, most search engines don't support natural language search in general, let alone natural language negation in particular.
There are several reasons for this, including the following:
1) Natural language understanding for search has gotten a lot better, but it is still not as robust as keyword matching. The upside of delighting some users with natural language understanding doesn't yet justify the downside of making the experience worse for everyone else.
2) Most users today don't use natural language search queries. That is surely a chicken-and-egg problem: perhaps users would love to use natural language search if it worked as well or better than keyword search. But that's where we are today. So, until there's a breakthrough, most search engine developers see more incremental gain from optimizing some form of keyword search than from trying to support natural language search.
3) Even if the search engine understands the search query perfectly, it still has to match that interpretation against the documentation representation. In general, it's a lot easier to understand a query like "shirt with stripes" than to reliably know which of the shirts in the catalog do or don't have stripes. No one has perfectly clean, complete, or consistent data. We need not just query understanding, but item understanding too.
4) Negation is especially hard. A search index tends to focus on including accurate content rather than exhaustive content. That makes it impossible to distinguish negation from not knowing. It's the classic problem that absence of evidence is not evidence of absence. This is also a problem for keyword and boolean search -- negating a word generally won't negate synonyms or other variations of that word.
5) The people maintaining search indexes and searchers co-evolve to address -- or at least work around -- many of these issues. For example, most shoppers don't search for a "dress without sleeves"; they search for a "sleeveless dress". Everyone is motivated to drive towards a shared vocabulary, and that at least addresses the common cases.
None of this is to say that we shouldn't be striving to improve the way people and search engines communicate. But I'm not convinced that an example like this one sheds much light on the problem.
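Point 4 above can be made concrete with a toy index (the catalog data here is invented for illustration): excluding documents that contain a negated token only removes items whose descriptions happen to mention it, so a striped shirt described with different vocabulary survives the negation.

```python
# Toy keyword index. Item "B" is in fact striped, but its description
# never uses the token "striped" -- the index can't know what it wasn't told.
catalog = [
    {"name": "A", "description": "striped cotton shirt"},
    {"name": "B", "description": "shirt with breton bands"},  # striped in reality
    {"name": "C", "description": "solid plain shirt"},
]

def boolean_search(must, must_not, items):
    """Return names of items whose description tokens include every 'must'
    term and none of the 'must_not' terms."""
    hits = []
    for item in items:
        tokens = set(item["description"].split())
        if must <= tokens and not (must_not & tokens):
            hits.append(item["name"])
    return hits

# Negating the token drops A, but the striped shirt B still comes back:
print(boolean_search({"shirt"}, {"striped"}, catalog))  # ['B', 'C']
```

The same sketch also shows the synonym problem mentioned in point 4: negating "striped" does nothing about "stripes", "stripy", or "breton".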
Looking at "shirt without stripes" and "shirt with out stripes" in Google Images, I think "without" is decompounded, which ends up giving you "shirt with stripes". The slight difference between the two searches is that some exact hits are mixed in as well, so results for the exact phrase "shirt without stripes" appear among the decompounded-query results.
Apparently, this kinda works in Thai too (and, I think, in other languages). The search keyword is "เสื้อไม่มีแถบ", which literally translates as 'shirt without stripes'. It's a common phrase in speech, unlike 'without' in English.
The results, of course, show shirts with some kind of stripes, albeit not as prominent as in the English case.
The latest embeddings/networks like BERT can handle encoding this logic. They take the surrounding words into account as context when they're encoded.
Google can do this now, for example in a prototype. The tough thing is to get it to consumer-grade quality without messing up other searches. The QA process is utterly brutal because one weird search can be a scandal.
On a positive note, Google used to have trouble with a query like "words with q without u"; now the top 5 pages at least all show the correct results, e.g.:
https://word.tips/words-with/q/without/u/
Since I was a teenager, if someone energetically asserts a statement is “true” or “false,” I drop the true or false and evaluate the statement. In essence, their only communication to me is, ‘I think this is important!’ Often, why they think it’s important is more pressing than whether the statement is true.
I wonder if this means humans need to learn search-query syntax: "-stripes" instead of "without stripes".
Or does input need to have basic filters applied before handing to ML? "without X" or "no X" = "-X"? Can be foiled with "shirt without having stripes".
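The pre-filter idea can be sketched with a naive regex rewrite (a hypothetical rule, not anything these engines actually run), and it is indeed foiled by "shirt without having stripes" exactly as described:

```python
import re

def rewrite_negation(query: str) -> str:
    """Naively map 'without X' or 'no X' to '-X' before handing off to search."""
    return re.sub(r"\b(?:without|no)\s+(\w+)", r"-\1", query)

print(rewrite_negation("shirt without stripes"))         # shirt -stripes
print(rewrite_negation("shirt no stripes"))              # shirt -stripes
print(rewrite_negation("shirt without having stripes"))  # shirt -having stripes (wrong)
```

The rule negates whatever word follows, not the noun the speaker meant -- the attribution problem in miniature.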
I think query analysis of the volume of actual people using this query will show that very few people, if any, actually type "shirt without stripes". Once enough people do, feedback accumulates that the results are bad (by CTR analysis), and the results will auto-correct.
In search we know it’s easy to cherry pick queries and criticize any search engine. A search engine is optimizing for billions of queries. Most of which are on the long tail.
The real question is: is "shirts without stripes" really a query people enter? Or is it representative of a real pattern in the data?
I wonder why this problem hasn't been solved yet, considering we've had NLP systems capable of this for a decade now.
Maybe it's too hard to scale to production. Or Pagerank is still better most of the time. Or plain old monopoly and risk aversion.
alberth|5 years ago
https://www.google.com/search?q=%22Shirt+without+Stripes%22&
Aardwolf|5 years ago
Yes, the English grammatical rules make it unambiguous where it belongs. This is solvable.
acdha|5 years ago
There’s a nuanced argument that practitioners know how dependent ML is on training data and how sharply accuracy tails off, but that nuance tends to be removed from anything sold to potential customers — which, in my experience, has not been a great way to keep them.
floatingatoll|5 years ago
Google has not yet discovered how to automate "is this a quality link?" evaluation, since they can't tell the difference between "an amateur who's put in 20 years and just writes haphazardly" and "an SEO professional who uses Markov-generated text to juice links". They have started to select "human-curated" sources of knowledge to promote above search results, which has resulted in various incidents, e.g. a political party's search results showing a parody image. They simply cannot evaluate trust without the data they initially harvested to make their billions, and without curation their algorithm will continue to fail.
nikanj|5 years ago
Google is a dumbass nowadays, and regularly ignores half your search terms to present you with absolutely irrelevant results that have gotten lots of visits in the past.
transreal|5 years ago
This is a case where, while it makes sense to say the sentence, it's not a common use of language. At the end of the day, the search engine will find what's written down; it's not a natural language processor yet (despite any marketing).
Shirt stores don't advertise "Shirts without stripes - 20% off", they describe them as "Solid shirts" or "Plain shirts". Men's fashion blogs talk about picking "solid shirts" or "plain shirts" for a particular look. If I walked into a clothing store and asked for "shirts without stripes", the sales person would most likely laugh and say "er, you mean you want plain shirts?".
Plain shirts/solid shirts are the most common way to refer to these, and people seem to be searching this way:
https://trends.google.com/trends/explore?date=all&q=solid%20...
Regarding moving towards natural language processing - the "without" part is not as important as knowing the context.
My kids will ask me to get from the bakery things like "the round bread with a hole and seeds", which I know means "sesame bagel", or "the sticky bread", which means "cinnamon twists" - which I understand because I know the context. Sometimes they say "I want the red thingy", and I need to ask a bunch of questions to eventually get at what they want (sometimes it's a red sweater, sometimes it's cranberry juice).
Unless Google starts asking questions back, I don't think there is any way it can give you what you want right away.
pbhjpbhj|5 years ago
Searching "pants" only shows me "trousers", that's a big fail for Google IMO, I'm accessing google.co.uk.
wkyle|5 years ago
The joke from Zizek: https://www.youtube.com/watch?v=wmJVsaxoQSw
albertzeyer|5 years ago
To extend on that, you can think of the human brain as just another (powerful) statistical model.
BickNowstrom|5 years ago
Doing just that for 10 years, beating hand-coded systems: https://www-nlp.stanford.edu/pubs/SocherLinNgManning_ICML201... [pdf]
> I would guess that most modern NNs from the NLP area (Transformer or LSTM) would be able to correctly differentiate the meaning.
Yes. See demos like: https://demo.allennlp.org/constituency-parsing/MTczNjYyNA== and https://demo.allennlp.org/dependency-parsing/MTczNjYyNg==
> I think there is no fancy NN (yet) behind Google search,
During the deep learning boom, Google made a huge push towards NN-based NLP. SEOs and their PR collectively call these efforts RankBrain: https://en.wikipedia.org/wiki/RankBrain
I think we are on the cusp of combining symbolic/logical operations with the vectors produced by neural networks (or at least, of a major effort there). It could come from neatly tying up all these different NN-based NLP modules (parsing, semantic distance, knowledge bases, ...) with another set of decision layers stacked on top.
DenisM|5 years ago
Chomsky: Statistical analysis of snowflakes falling outside the window may predict the next snowflake, but it will do very little for weather prediction, and nothing for climate analysis.
Norvig: Give us enough data and we will get close enough for all practical purposes.
GuB-42|5 years ago
- Shirt Without Stripes: shirts where the description contains both "without" and "stripes". Example: a shirt without collar, with stripes.
- "Shirt Without Stripes": a mess, with and without stripes, suggesting an unusual search query. In fact, the linked article site is the first result in web search.
- Stripeless shirt: sexy women in strapless shirts
- "stripeless shirt": pictures of Invader Zim...
- "stripeless" shirt: mostly shirts without stripes, but there are some shirts with stripes that are described as stripeless...
The last one may give us a hint at the problem. If you have to mention that a shirt is without stripes, you are probably comparing it to a shirt with stripes. For example, imagine a forum: some guy posts a picture of a shirt with stripes, and I can expect some people to ask questions like "do they sell this shirt without stripes?". Or maybe the seller himself has something like "shirt without stripes available here (link)" in the description. So the search engines tie "shirt without stripes" to pictures of shirts with stripes.
I remember an incident where searching for "jew" on Google led to antisemitic websites. The reason was simply that that exact word was rarely used in other contexts. Mainstream and Jewish sources tend to use the words "jews" and "jewish" but not "jew". And because Google doesn't look at the dictionary meanings of words but rather at what people use them for, you get issues like that.
knodi123|5 years ago
I had a similar problem when I was trying to convince a friend that homeopathy was a complete and utter fraud with absolutely no basis in reality. She was convinced that the internet's overwhelming consensus was that homeopathy was valuable and regular doctors were control-freaks who make things up when they don't know the answers.
To prove her point, she did an internet search for allopathic medicine and showed me how the majority of the results were negative.
https://en.wikipedia.org/wiki/Allopathic_medicine
Just a humorous anecdote, not trying to start any conversations about the relative value of different medical paradigms.
bentona|5 years ago
Sometimes I wonder how much my brain has changed to use search engines / how much of it is dedicated to effective googling. Makes me feel like a cyborg.
Saaster|5 years ago
I think you're overestimating Google's sophistication.
burger_moon|5 years ago
Knowing how the machine will interpret humans is just as important for finding your results.
Tade0|5 years ago
"Humans usually don't intuitively understand the word 'no'. Please imagine a non-pink elephant."
captainmuon|5 years ago
It would be great if I could add negative keywords to a website, or mark text as "don't index" or "index with a negative weight". But probably, people would game this in ways I can't imagine.
There is probably a clever ML solution for this, like having meaning-vectors for distinct ideas, and pushing pages that are close to one meaning away from the other meaning. Classification is easy if you have a keywords like "painting" and "catholic", but if it is "virgin" or "prayer" then it could be either meaning, so there is never a bullet-proof solution.
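The meaning-vector idea can be illustrated with cosine similarity over toy vectors (the dimensions and values here are invented for illustration): a page is scored against each sense, and ranking for one sense can push away pages that sit closer to the other.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d "meaning vectors" for the two senses of an ambiguous
# keyword like "virgin": religious paintings vs. prayer pages.
art_sense      = [0.9, 0.1, 0.0]
religion_sense = [0.1, 0.9, 0.0]
page           = [0.8, 0.3, 0.1]  # a page that mentions both kinds of words

# The page scores closer to the art sense, so a prayer query could demote it.
print(cosine(page, art_sense) > cosine(page, religion_sense))  # True
```

As the comment says, ambiguous pages land between the two sense vectors, so a threshold-based classification is never bullet-proof.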
cscurmudgeon|5 years ago
The theme of this talk was how they did a study that showed prepositions and articles do have meaning. A big deal was made out of the results.
I think things like this happen when people come to treat engineering approximations, such as bag of words, as the truth over time.
blahedo|5 years ago
There were gasps in the room and a kind of depressed acquiescence: geez, he might be right. And the pendulum indeed swung in that direction, hard, and the field has been overwhelmingly dominated by the statistical machine learning folks on the CS side of the field, while the linguists kind of quietly keep the flame alive in their corner.
But I thought then, and I still think now, that it really just was another swing of the pendulum (which has gone back and forth a few times since the birth of the field in the 1960s). Perhaps it's now time again for someone to ring up the linguists and let them apply their expertise again?
c3534l|5 years ago
Likewise, Google says I should log into their website for personalized search results, but after years of always clicking on Python 3 results over Python 2.7 results, it never learned to show me the correct result.
Eventually I realized that personalized recommendations are more or less just a thin cover for collecting vast amounts of data with no benefit to the consumer. I believe we have the technology to do better, but we don't use it. In fact, we seem to be using it less and less.
ChuckMcM|5 years ago
As humans we know immediately that the search is for documents about shirts where stripes are not present. But the term 'without' doesn't make it through to the term compositor step which is feeding terms in a binary relationship. We might make such a relationship as
Q = "shirt" AND NOT "stripes"
You could onebox it (the Google term for a search short-circuit path that recognizes the query pattern and does some specific action; calculations, for example, are a onebox) and then you get a box of shirts with no stripes and a bunch of query results with stripes.
You can n-gram it, by ranking the without-stripes n-gram higher than the individual terms, but that doesn't help all that much because the English language documents don't call them "shirts without stripes", generally they are referred to as "plain shirts" or "solid shirts" (plain-shirt(s) and solid-shirt(s) respectively). But you might do okay punning without-stripes => plain or to solid.
From a query perspective you get better accuracy with the query "shirts -stripes". This algorithmic query uses unary minus to indicate a term that should not be on the document but it isn't very friendly to non-engineer searchers.
Finally you can build a punning database, which is often done with misspellings like "britney spears" (ok so I'm dating my tenure with that :-)) which takes construction terms like "without", "with", "except", "exactly" and creates an algorithmic query that is most like the original by simple substitution. This would map "<term> without <term>" => "<term> -<term>". The risk there is that "doctors without borders" might not return the organization on the first page (compare results from "doctors without borders" and "doctors -borders", ouch!)
When people get sucked into search it is this kind of problem that they spend a lot of time and debate on :-)
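A minimal sketch of the punning substitution described above, with an exception list for phrases like "doctors without borders" (the phrase list is hypothetical; a real system would mine it from query logs):

```python
# Phrases that look like '<term> without <term>' but must not be rewritten.
KNOWN_PHRASES = {"doctors without borders"}

def pun_rewrite(query: str) -> str:
    """Rewrite '<a> without <b>' to '<a> -<b>' unless the query is a known phrase."""
    q = query.lower()
    if q in KNOWN_PHRASES:
        return query
    head, sep, tail = q.partition(" without ")
    if sep:
        return f"{head} -{tail}"
    return query

print(pun_rewrite("shirt without stripes"))    # shirt -stripes
print(pun_rewrite("doctors without borders"))  # doctors without borders
```

The exception list is exactly the part that makes this expensive to maintain, which may be one reason the major engines haven't shipped it.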
ggggtez|5 years ago
It's a completely artificial construct. Simply the fact that this Hacker News entry is the #1 search result shows that real human people do not perform this search in significant quantity. But we can quantify that with data to back up the assumption [1][2]. When people want to buy a shirt without stripes, they do not describe the shirt by what it doesn't have.
In fact, it's trivial to cherry pick a random selection of words that on the face of it sounds like something a human might search for, but it turns out never occurs in practice. Add to that the fact that the term is being searched without quotes [3], which results in the negation not actually being attached to anything.
Do you go to a store to buy it along with your Pants Without Suspenders, Socks Without Animal Print, and other items defined purely by what they don't have?
[1] https://trends.google.com/trends/explore?geo=US&q=%22white%2... [2] https://trends.google.com/trends/explore?geo=US&q=%22plain%2... [3] https://trends.google.com/trends/explore?geo=US&q=plain%20sh...
shanecleveland|5 years ago
Likewise, here, I would search for solid-colored shirts.
And these services are limited to the content/terminology utilized by the cataloged sites/products.
If I am selling a "black shirt" or a "solid black shirt," it is not google's job to catalog it as a "shirt without stripes," unless I advertise it as a "black shirt without stripes."
I would use natural language only to test a service's NLP ability.
wooders|5 years ago
There are too many products nowadays for them to be manually attributed (e.g. pattern=stripes), making it hard to return good results even with entity resolution for queries. We train classifiers to categorize products, including what something is not, using their images and descriptions.
schmichael|5 years ago
I'm guessing one of those reflections looks like a turtle? Or maybe a pattern on the floor, wall, or rug?
Although there are examples where I'm unsure if the AI is dumber than my 4yo or smarter than me. This is a result for "truck": https://i.imgur.com/JcgXZAG.jpg
Even (especially?) my 4yo knows those are Brio trains, not trucks. However, trains have components called trucks! https://en.wikipedia.org/wiki/Steam_locomotive_components I'm unsure whether or not any of the wheel assemblies on these toy trains are considered trucks, so either the AI is extremely smart or slightly dumber than a 4yo.
joshmn|5 years ago
Joking aside, it doesn't surprise me that this isn't being picked up — aren't most of these AI teams more R&D than actual public-facing? Maybe I'm just cynical though.
lifeisstillgood|5 years ago
'shirt no stripes'
on Google returned this web page at top of the organic results.
So at some point, searching for a shirt online will involve this conversation. Even more confusing.
(Although I expect my filter bubble will play a part in that)
throw345hn|5 years ago
After that I facepalmed myself and turned it off.
lokedhs|5 years ago
Normal humans do this all the time, and if I can't do it when speaking to a machine, it becomes incredibly frustrating, to the point that I never want to do it again. I don't want to plan ahead what to say before I say it.
Granted, it's been a couple of years since I last tried so maybe they're better now.
kator|5 years ago
Larry: "I saw a red Lamborghini in the parking lot!"
Most people will assume Lisa is driving a red Lamborghini and is back from vacation; meanwhile, all the bots are searching for Lamborghini vacations and trying to figure out what's going on in the conversation.
partomniscient|5 years ago
"shirts -stripes" results: https://www.amazon.com/s?k=shirts+-stripes&ref=nb_sb_noss_2
So basically the AI doesn't convert "without x" to "-x" even though the basic capability needed is there. This is why AI is a hard problem, especially when it meets the real world.
It's 2020 and we're still quibbling about the terminology used in SQL, what did we expect?
mabbo|5 years ago
The state of the art in machine translation (from what I've read at least) is translating from language-A to a language-less "concept space" and then from there to language-B. Could that be done where the output language is something a search engine can use to find what you want correctly?
Given that pattern, I suspect we could see much better results in cases like this.
bityard|5 years ago
Today, entering in any tech-related query at all takes you to StackOverflow, end of story. Not only are SO answers quite often outdated (or even terrible advice in general), most of the time I'm not looking for a "here's how you do X", I'm looking for background information on a topic.
Most non-tech queries I put into google are even _more_ useless as the results tend to fall into these categories:
The dev/tech industry desperately needs a search engine that somehow prioritizes _quality_ content, not one-off answers, blogspam, and tweets.
pugworthy|5 years ago
"birds without flight"
"cars without wheels"
"cats without tails"
"dogs without hair"
"intersections without lights"
"poems without rhyme"
"shirts without collars" (also "sleeves", "shoulders", "buttons", "logos", "pockets", and more)
imgabe|5 years ago
What if you walked into a store and asked an associate for a shirt without stripes? What would you get?
Probably some further questions for clarification. What about checked shirts? Floral prints? Plaid? Do you want no pattern at all? T-shirt? Polo shirt? Dress shirt?
Granted, the AI results are particularly bad because they give you the one thing that you specifically didn't ask for, but that's also the only information you provided. Defining a query in terms of what you don't instead of what you do isn't going to go well.
What if you went to google and said "Show me all the webpages that aren't about elephants"? Sure, you'd get something, but would it be anything useful?
hombre_fatal|5 years ago
Google has gotten better, it's just HNer expectations that have changed as they expect more and more magic.
For example, the subtitle on the repo is "Stupid AI" when this query has never worked in these search engines, and it won't anytime soon.
You'd think the technical HN crowd would be more advanced than to make the same mistakes that (they complain that) stakeholders/users/gamers make when they mistakenly think everything is much easier than it actually is. Things aren't "stupid" just because they can't yet read your mind.
kinkrtyavimoodh|5 years ago
This thread is an excellent example. The author of the linked page didn't have the decency to actually make a substantive point, instead sharing three screenshots and posting the link here, chumming the HN waters with the kind of stuff that brings in the sharks from far and wide.
Bashing on big cos: Check
Vague pronouncements about AI: Check
Generic side-swipes about 'ad revenue': Check
This is why a coherent thesis is required to even initiate a proper discussion, because in the absence of that it invariably devolves to lowest-common-denominator shit-flinging.
thedeviantdev|5 years ago
The former returns lots of mixed race couples, mostly not white couples. However the latter returns black couples.
What is going on here? Similar phenomenon perhaps?
quickthrower2|5 years ago
What is the expected result, can we agree?
dk8996|5 years ago
https://medium.com/pinterest-engineering/pinsage-a-new-graph...
obarthelemy|5 years ago
"Don't think of a cow !"
What did you just think of ? A cow, of cowrse.
If you want a shirt w/o stripes, just google "plain shirt" or "dress shirt -stripes".
If you're curious to learn more about query understanding, I suggest you check out https://queryunderstanding.com/introduction-c98740502103
jpswade|5 years ago
Nobody would describe a plain shirt as a shirt without stripes unless it’s within that context.
bryanrasmussen|5 years ago
https://www.amazon.com/s?k=plain+shirt
on edit: https://www.google.com/search?q=shirt+-stripes
aaron695|5 years ago
But make a script that scrapes the top X results for these sites. Get your own AI / humans to rate it.
Make it competitive for these large sites <==> give them an incentive.
mirimir|5 years ago
So it's not such a big deal that negation doesn't work.
Also, "shirts -stripes" does seem to work in both Amazon and Google. Or at least, I see no striped shirts.
moultano|5 years ago
> in particular, it shows clear insensitivity to the contextual impacts of negation.
cfv|5 years ago
As in, "X without Y" sounds like a common enough use case to have its own little parser branch in places as big as Google or Amazon.
realo|5 years ago
https://www.amazon.ca/s?k=shirt+without+stripes&ref=nb_sb_no...
carapace|5 years ago
Fishbone - Drunk Skitzo https://youtu.be/SaPGH4Yd_zc?t=231
(Apologies for the snarky low-content flip reply.)
arnaudsm|5 years ago