This is cool, and I agree that English urgently needs a better script. However, design goal #3 (distinct from the Latin alphabet) makes it clear that this is just a toy, or a proof of concept if you will. If you consciously design a script to be as alien as possible to the target audience (English speakers), then you simply don't want adoption.
I'm no expert, and I bet this has been explored, but I always felt that there's fantastic precedent in the world for using the Latin alphabet for writing sounds that occur in English; especially in the Scandinavian languages.
E.g. the word "sauce", which consists of two identical consonants and a single vowel. Why not choose å for that vowel? Sås makes a lot of sense, and coincidentally is also the Swedish word for "sauce", iirc. Or what about the two "th" sounds? Icelandic has got you covered: instead of "this" and "thin", can't we write "ðis" and "þin"?
Note: this is not without precedent. The Norwegians totally overhauled their script until it was pretty much phonetic. This gave them fantastic benefits such as way fewer pupils struggling with dyslexia (at least, until learning Danish or English), and heartwarmingly lovely spellings of loan words such as "restaurang" and "stasjon".
It could be done.
Personally I think it's totally insane to use a phonetic script (as opposed to e.g. Chinese) but then not spell out the sounds. The Finns, Norwegians, Italians and Russians (and plenty of others) got it right.
Eth and thorn used to be used in English anyway. They were removed to make printing easier (the 'old timey' 'ye' for 'the' comes from the practice of printing thorn as a y).
If you just want to annotate sounds, it seems like your goals overlap a lot with some of the uses IPA is designed for. Wiktionary gives /sɑs/ as one pronunciation of sauce.
In any case, you'd have to solve the same problems people already using IPA for English deal with. For one, you'd probably want the same spelling system to be used across all major English dialects, which at least necessitates something like diaphonemes: https://en.wikipedia.org/wiki/Diaphoneme
So for most speakers, different written symbols might be pronounced the same way (i.e. the map from symbol to sound is not injective, and furthermore the map changes depending on which dialect of English you speak).
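A toy sketch of that non-injectivity (the dialect transcriptions below are simplified assumptions for illustration, not careful IPA; the sauce/source merger for non-rhotic speakers is mentioned elsewhere in this thread):

```python
# Toy symbol-to-sound maps for two dialects. In a non-rhotic accent,
# "sauce" and "source" come out the same, so the map from spelling to
# sound is not injective; in a rhotic accent it is (for these words).
symbol_to_sound = {
    "non-rhotic": {"sauce": "sos", "source": "sos"},
    "rhotic": {"sauce": "sas", "source": "sors"},
}

def is_injective(mapping):
    # Injective iff no two keys share the same value.
    return len(set(mapping.values())) == len(mapping)

print(is_injective(symbol_to_sound["non-rhotic"]))  # False
print(is_injective(symbol_to_sound["rhotic"]))      # True
```

And, as the comment says, which of the two maps applies depends on which dialect you speak.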
Additionally, you probably can't simultaneously achieve the goals of
1. use only one character per phoneme
2. use only the 26 characters of the alphabet
If you insist on keeping property (2), some phonemes will be written using (say) two characters, but you'd have to make sure that there's no ambiguity (as there would be if there were three vowels written respectively "e", "i" and "ei").
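That ambiguity can be made concrete: with phoneme spellings "e", "i", and "ei", the letter string "ei" has two valid readings. A small sketch (the three-vowel inventory is just the hypothetical from the comment above):

```python
def segmentations(s, units):
    """Enumerate every way to split s into a sequence of spellings from units."""
    if not s:
        return [[]]
    results = []
    for u in units:
        if s.startswith(u):
            results += [[u] + rest for rest in segmentations(s[len(u):], units)]
    return results

# "ei" parses both as two phonemes ("e" then "i") and as one ("ei"):
print(segmentations("ei", ["e", "i", "ei"]))  # [['e', 'i'], ['ei']]
# The ambiguity exists because "e" is a prefix of "ei"; a prefix-free
# set of spellings would make every letter string uniquely decodable.
```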
Restricting yourself to the basic Latin alphabet is a nice feature and digraphs work fine IMO. You could easily solve the this/thin problem by spelling them "dhis" and "thin".
"Norway did not even have a revolution at the time the rest of Europe was busy figuring out human rights and stuff, because we were busy fighting over how to spell it." -- Erik Naggum
> Eg the word "sauce", which consists of two identical consonants and a single vowel. Why not choose å for that vowel? Sås makes a lot of sense
'sauce' has three consonants in my accent and no 'a' sound; it's a homophone of 'source'. Likewise, in your proposal, would 'pås' be a homograph for 'pass' and 'pause'?
"The European Commission has just announced an agreement whereby English will be the official language of the European Union rather than German, which was the other possibility.
As part of the negotiations, the British Government conceded that English spelling had some room for improvement and has accepted a 5-year phase-in plan that would become known as "Euro-English".
In the first year, "s" will replace the soft "c". Sertainly, this will make the sivil servants jump with joy. The hard "c" will be dropped in favour of "k". This should klear up konfusion, and keyboards kan have one less letter. There will be growing publik enthusiasm in the sekond year when the troublesome "ph" will be replaced with "f". This will make words like fotograf 20% shorter.
In the 3rd year, publik akseptanse of the new spelling kan be expekted to reach the stage where more komplikated changes are possible. Governments will enkourage the removal of double letters which have always ben a deterent to akurate speling. Also, al wil agre that the horibl mes of the silent "e" in the languag is disgrasful and it should go away.
By the 4th yer people wil be reseptiv to steps such as replasing "th" with "z" and "w" with "v".
During ze fifz yer, ze unesesary "o" kan be dropd from vords kontaining "ou" and after ziz fifz yer, ve vil hav a reil sensibl riten styl. Zer vil be no mor trubl or difikultis and evrivun vil find it ezi tu understand ech oza. Ze drem of a united urop vil finali kum tru.
Und efter ze fifz yer, ve vil al be speking German like zey vunted in ze forst plas.
If zis mad you smil, pleas pas on to oza pepl."
[0] https://luminusdadon.wordpress.com/2006/05/04/how-english-be...
(I'm sure I saw this as an email well before 2006 though.)
A version of that joke has been attributed to Mark Twain, though this source[0] says it was more probably M.J. Yilz, in this[1] 1971 letter to the Economist.
[0]: http://grammar.ccc.commnet.edu/grammar/twain.htm
[1]: http://www.lettersofnote.com/2012/05/iorz-feixfuli-m-j-yilz....
> In the 3rd year, publik akseptanse of the new spelling kan be expekted to reach the stage where more komplikated changes are possible. Governments will enkourage the removal of double letters which have always ben a deterent to akurate speling. Also, al wil agre that the horibl mes of the silent "e" in the languag is disgrasful and it should go away.
At this level, it really reads like archaic English spelling before things were really standardized.
George Bernard Shaw often railed against the inconsistencies of English. Its grammar and spelling are unbelievably inconsistent and a constant headache even to native speakers. I love English and think it’s a very expressive language. I used to be an incredibly good speller (AutoCorrect has made that less important). I have a superb grasp of its grammar as both a speaker and a writer.
But boy, do I have sympathy for anyone, native speaker or not, who has to deal with these bugs. They bring a lot of grief and embarrassment, with little return for all the heartache.
Those bugs and inconsistencies are really just manifestations of the complicated history of the language. There are few languages which have as many diverse parent languages as English has.
Agreed. I read a report that children learning English have to devote far more time to learning spelling than speakers of other languages do, and that this comes at the cost of time spent studying other subjects. I have never seen an argument against standardizing spelling, but I guess there is no political will or academic support to do it.
As far as I'm aware, all natural languages are about equally expressive and native speakers can't really make grammatical mistakes when speaking (writing is a different topic).
I learned the Shavian alphabet a few years ago, and gradually built up a list of transliterations as I read a few ebooks partially converted over (read a bit, add some more words, repeat). I had to make the font characters too (for Deja Vu, but I didn't contribute them back as mine seemed a bit amateurish).
What made me give up is recognising just how different spellings would be in different dialects - it means you lose the whole-word recognition that lets you read quickly. Unfortunately that would likely be the case for any English spelling reform effort.
I subsequently lost most of my (NZ English) transliterations too, if I remember correctly.
> What made me give up is recognising just how different spellings would be in different dialects - it means you lose the whole-word recognition that lets you read quickly.
Losing the ability to read quickly is the big Achilles heel of Shavian and Quikscript.
This can be mitigated somewhat as you get used to spelling things mostly the same way and reading your own spellings, but when you have to read someone else's writing it's back to sounding out the words in your head, which will slow down reading speed to a crawl compared to reading traditional English orthography.
"To provide field testing of the new alphabet [Shavian], Read organized a lengthy public testing phase of Shavian by some 500 users from around the world who spoke different dialects of English. Once he had analyzed the results of those tests, Read decided to revise Shavian to incorporate a number of changes to improve the alphabet and make it both easier and faster to write. He called the revised alphabet "Quikscript"..."[1]
The main advantage of Shavian is that it is part of Unicode while Quikscript isn't. I'm not sure why that's the case; I'd have expected it to be the other way around.
[1] - https://en.wikipedia.org/wiki/Quickscript
[2] - https://www.reddit.com/r/Quickscript/
As much as I think English needs spelling reform, I don't think a brand new alphabet is the answer. In particular, the Shavian alphabet would be a nightmare for people like me who had trouble with reversed letters as a child. (My mother was particularly amused when I once wrote "rubberband" as rudderdanb.) Symmetry between the voiced and unvoiced letters is cool in theory but a disaster for dyslexics like me.
To go in completely the opposite direction, Ithkuil (ithkuil.net) is a constructed language whose script encodes semantic meaning rather than phonetic form. A written word may be pronounced in several qualitatively different ways, as long as they all have the same meaning.
The problem with making spelling more phonetic is that not everyone pronounces English the same. Even a single country like the United States or the United Kingdom contains a range of accents and dialects. A phonetic spelling is always going to be non-phonetic for some speakers.
I'm afraid that phonetic alphabets are useless for English because pronunciation is not stable [1] and because it has a lot of local variants. I suspect that English will fossilise its current orthography and turn words into logograms composed of Latin letters, similar to Chinese.
Every single living language out there has unstable pronunciation and a lot of local variants. A language is a living thing and is constantly in flux, both through time and space.
One problem with English spelling lies in tradition, with many pronunciation changes not being followed in spelling -- it used to have quite a phonetic spelling: if you read today's English phonetically, it will sound quite similar to Middle English. Another problem is inconsistencies added just for the sake of it: e.g. "queen" was originally spelled "cwene" (it's an Anglo-Saxon word), but the spelling was changed according to rules of French orthography used by Norman scribes.
English could adopt a phonetic variant of the Latin alphabet quite easily -- there is a clearly defined set of phonemes it uses -- but it probably won't, because it is too widely spread across a large number of countries (it's an official language in 59 countries, twice as many as the second-place language, French, at 29 [0]), and there is no central standardisation body whose decisions might be accepted by all of them.
[0] https://en.wikipedia.org/wiki/List_of_languages_by_the_numbe...
Except English pronunciation, just like the pronunciation of every other human language, hasn't been stable for centuries, and English has so far failed to "fossilise its current orthography". And there are plenty of other languages with local variants and unstable pronunciation that have phonetic alphabets. Japanese has two.
The point has been made that different people pronounce English differently. I don't think that's an insurmountable obstacle: if you made a sort of union of RP and standard American pronunciation, probably the rest of the English-speaking world would be happy to use a spelling system based on that, if they were willing to accept any change at all, that is.
A more serious problem, I think, is how the pronunciation of a single word element varies between words even for a single speaker of the standard language. How would a reformed English spelling cope with the vowels in "photograph" and "photography", for example? Perhaps there's a neat solution, but I don't know what it is. At least I'm fairly sure that phonemic transcription isn't it.
Honestly I'm not sure what difference there is between "photograph" and "photography" (now that I've looked it up, I would never have guessed), but most languages with sound changes like that just change the letters between variants of a word, instead of keeping the same letters and changing their pronunciation.
In Dutch, long vowels that turn into short vowels give pairs like meer/meren (singular/plural). In German, you have Land/Länder (singular/plural as well). French also does it with, say, régler/règlement (verb/noun).
English does it too in some cases, with say wife/wives. So it could very well be something like photögraph/phötagröphy. I don't see why it wouldn't work.
The biggest drawback of a new alphabet is that we burn word shapes into our brain (we don’t read words by letter unless they are unfamiliar), so we would have to get used to a bunch of new shapes (a major disadvantage) even if pronouncing unfamiliar words was easier (a minor benefit).
As spoken language evolves, we eventually have to adapt the written language. The alternative is written English. That beast managed to incorporate a whole body of words from another language without adapting the spelling of those words. It ends in tears, afresh every year, when you try to teach kids the broken state that is English orthography.
The Shavian alphabet is attacking this from the wrong end, though. For one thing, we already have a phonetic alphabet; there is no point in inventing another. The main issue, though, is that you want your alphabet to be stable against vowel pronunciation changes. Vowel pronunciation changes quickly and carries little semantic meaning. If you try to encode it in your alphabet, it's going to need an update as you move from city to city.
The abjad (consonant alphabet) approach of just leaving the vowels as an exercise to the reader may seem a little extreme. It has the distinct advantage of staying stable over centuries though.
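That stability is easy to demonstrate with a toy consonant skeleton (a deliberately crude model of an abjad; real ones treat long vowels and semivowels more carefully, and the word pairs below are just illustrative examples):

```python
def skeleton(word, vowels="aeiou"):
    # Keep only the consonant letters, as an abjad roughly would.
    return "".join(ch for ch in word.lower() if ch not in vowels)

# Spelling differences that live entirely in the vowels disappear:
print(skeleton("colour"), skeleton("color"))   # clr clr
print(skeleton("centre"), skeleton("center"))  # cntr cntr
```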
The word "mostly" here is easily an overstatement... have you ever seen how semi-literate people write Spanish? If you have, you will understand that there are lots of ambiguities in Spanish orthography. Almost any word can be written in many different ways and be read nearly the same. Moreover, the same symbol is often used for different sounds (but not for different phonemes). For example, the "n" in "cangrejo" and "no", or the two "d"s in "dado".
Turkish is also mostly "WYRIWYP". We like to say that it's pronounced as it is written. As a rule, there are no exceptions in pronunciation, but there are some exceptions to that rule. :)
Dialects within the country and in Cyprus use the "Istanbul" Turkish for formal communication, so there exists only one "correct" way to spell each word. But of course, for casual talk and texting, each dialect writes certain words the way they pronounce them.
One cool corollary of the WYRIWYP rule is that there's only one correct spelling of any given sequence of sounds. If you take some native Turkish speakers and have them listen to a made-up word, 99% of them will come up with the same spelling independently of each other.
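That corollary is exactly what a one-to-one letter/phoneme table gives you: invert it, and every sound sequence has a single spelling. A minimal sketch (the three-entry table is an invented fragment for illustration, not a full Turkish inventory):

```python
# A bijective letter <-> phoneme table: each letter stands for exactly
# one phoneme and vice versa, so spelling from sound is deterministic.
letter_to_phoneme = {"s": "s", "a": "a", "ş": "ʃ"}
phoneme_to_letter = {p: l for l, p in letter_to_phoneme.items()}

def spell(phonemes):
    # One letter per phoneme: there is only one possible spelling.
    return "".join(phoneme_to_letter[p] for p in phonemes)

print(spell(["s", "a", "ʃ"]))  # saş
```

English, by contrast, has no such bijection (think "c" vs. "k" vs. "ck"), which is why the 99%-agreement experiment would fail for it.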
I can highly recommend shorthand. I had the same desire. It's only been 2 years and some change for me, but I can write at about 80% of my typing speed last I measured it. It's wayyyy faster than my normal writing and my hand "switches gears" automatically when I start to write fast.
Here's a blog post that's unrelated, but you can see a little bit of my shorthand writing in the first photo: https://www.friendlyskies.net/intj/modeling-and-detecting
I started with Ford Shorthand (.com) and then added onto it from there. Anyway, can definitely recommend.
Are there any examples of people trying to make a change to the English language and being successful? It seems to me that most of the changes occur naturally rather than by directed effort.
I don't believe it's possible to make a clear distinction between "naturally" and "by directed effort" in the case of language. The degree to which a person is conscious of the change they are making might be a theoretical criterion, but it would usually be impossible to apply it in practice as in most cases we don't know who the person was, let alone what their mental state was at the time. Even if we know who made a change and they are still living they usually wouldn't be able to tell us reliably what they were thinking.
There are plenty of individual English words whose spellings were deliberately changed by some person for one reason or another. Perhaps the only major change affecting a large number of words promulgated by a single person was Noah Webster's reform of American English that created most of the differences between American English and British English. There's a book called Does spelling matter? which has a lot of information about English spelling and historical reform proposals (though the book is perhaps not entirely reliable: there are some weird non-sequiturs in it that made me suspicious).
Most American (mis)spellings were a deliberate attempt to simplify spelling by people such as Benjamin Franklin. Some of them took, like dropping the u from colour, honour, valour, et cetera. Others, while not necessarily considered correct, are widely used and widely understood. "Tonite" comes to mind... it was even in my phone's autocorrect.
Seems a strange criterion that it should have at least 40 letters - to me it seems like the goal should be to use as few letters as possible while still achieving the other goals?
The Lojban baseline was completed in 1997, so I think it's too early to tell whether that effort was completely wasted. Esperanto has had its chance but never really reached its goals, I assume. But Lojban, or something it might evolve into, could be the language of the future.
It's not immediately obvious what the benefit is over Pitman script, which is mentioned in the article. It seems like Pitman is faster to write because the characters are simpler, and afaik Pitman had wider adoption - I'd never heard of the Shavian alphabet before.
singularity2001 | 7 years ago:
Why? Some ortografic adjustments wud do.
zodiac | 7 years ago:
An example of a situation that shows the necessity of diaphonemes: https://www.reddit.com/r/linguistics/comments/2g50oo/working... (specifically the pronunciation of "bed", "bad" and "bid" in the GA and NZE dialects)
raverbashing | 7 years ago:
Because å has a historical significance, using it would just be keeping with historical crud; use ø?/o/u instead.
xenadu02 | 7 years ago:
Such efforts wud hav the benefit of beeng immeedeeatly obveeus to existeeng speekurs
seanmcdirmid | 7 years ago:
Sunk cost.