I suppose you could say that parsing any text-based protocol in general "is a minefield". They look so simple and "readable", which is why they're appealing initially, but parsing text always involves lots of corner cases, and I've always thought it a huge waste of resources to use text-based protocols for data that's not actually meant for human consumption the vast majority of the time.
Consider something as simple as parsing an integer in a text-based format: there may be whitespace to skip, an optional sign character, and then a loop to accumulate digits and convert them (itself a subtraction, multiply, and add), and there are still the questions of all the invalid cases and what they should do. In contrast, in a binary format, all that's required is to read the data, and the most complex thing which might be required is endianness conversion. Length-prefixed binary formats are almost trivial to parse, on par with reading a field from a structure.
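A Python sketch of that contrast (the helper name is made up, and the fixed-width read stands in for the length-prefixed case):

```python
import struct

def parse_int(text: str) -> int:
    """The 'simple' text case: skip whitespace, optional sign, digit loop."""
    i, n = 0, len(text)
    while i < n and text[i] in " \t":      # whitespace to skip
        i += 1
    sign = 1
    if i < n and text[i] in "+-":          # optional sign character
        sign = -1 if text[i] == "-" else 1
        i += 1
    if i >= n or not text[i].isdigit():    # one of many invalid cases
        raise ValueError("no digits")
    value = 0
    while i < n and text[i].isdigit():     # accumulate: subtract, multiply, add
        value = value * 10 + (ord(text[i]) - ord("0"))
        i += 1
    return sign * value

# The binary case: one fixed-width read plus (at most) an endianness swap.
raw = struct.pack(">i", -42)               # big-endian int32 on the wire
(decoded,) = struct.unpack(">i", raw)
```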
> Length-prefixed binary formats are almost trivial to parse
They definitely are not, as demonstrated by the fact that binary lengths are the root cause of a huge number of security flaws. JSON mostly avoids that.
> the most complex thing which might be required is endianness conversion
That is a gross simplification. When you look at the details of binary representations, things get complex, and you end up with corner cases.
Let's look at floating point numbers: with a binary format you can transmit NaN, Infinity, -Infinity, and -0. You can also create two NaN numbers that do not have the same binary representation. You have to choose single or double precision (maybe a benefit, not always). Etc.
Similarly in JSON integers or arrays of integers are nothing special. It is mostly a benefit not to have to specify UInt8Array.
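A sketch of both corner cases in Python (the specific NaN bit patterns below are illustrative quiet NaNs, not from any particular protocol):

```python
import json
import struct

# Two quiet NaNs with different bit patterns: they compare equal to nothing,
# yet their on-the-wire bytes differ, so byte-level comparison is a trap.
nan_a = struct.unpack("<d", bytes.fromhex("000000000000f87f"))[0]
nan_b = struct.unpack("<d", bytes.fromhex("010000000000f87f"))[0]
assert struct.pack("<d", nan_a) != struct.pack("<d", nan_b)

# Strict JSON sidesteps the issue by refusing the NaN/Infinity family outright.
try:
    json.dumps(float("nan"), allow_nan=False)
except ValueError as e:
    print("rejected:", e)
```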
JSON is one of many competitors within an ecology of programming, including binary formats, and yet JSON currently dominates large parts of that ecology. So far a binary format mutation hasn't beaten JSON, which is telling since binary had the early advantage (well: binary definitely wins in parts of the ecology, just as JSON wins in other parts).
This is a little too generous about the benefits of binary formats vs text formats. Ultimately, any data exchange between disparate systems is going to be a challenging task, no matter what format you choose. Both sides have to implement it in a compatible way. And ultimately, every format is a binary format. Encoding machine-level data structures direct on the wire sounds good, but it quickly gets complicated when you have to deal with multiple architectures and languages. And you don't have the benefit of the gradually accreted de-facto conventions like using UTF-8 encoding for text-based formats to fall back on, much less the ability for humans to troubleshoot by being able to read the wire protocol.
With sufficient discipline and rigor, and a good suite of tests, developed over years of practical experience, you can evolve a good general binary wire protocol, but by then it will turn out to be so complicated and heavyweight to use, that some upstart will come up with a NEW FANTASTIC format that doesn't have any of the particular annoyances of your rigorous protocol, and developers will flock to this innovative and efficient new format because it will help them get stuff done much faster, and most of them won't run into the edge cases the new format doesn't cover for years, and then some of them will write articles like this one and comments like yours and we can repeat the cycle every 10-20 years, just like we've been doing.
> Consider something as simple as parsing an integer in a text-based format; there may be whitespace to skip, an optional sign character, and then a loop to accumulate digits and convert them (itself a subtraction, multiply, and add), and there's still the questions of all the invalid cases and what they should do.
^ *-?[0-9]+ *$
You're welcome. Anything that matches that regex is a valid number. Now using that as the basis of a lexer means that you can store any int in whatever precision you feel like.
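As a sketch, gating on a regex like that in Python before handing the token to an arbitrary-precision conversion (the function name is made up):

```python
import re

# Optional surrounding spaces, optional sign, one or more digits.
INT_RE = re.compile(r"^ *-?[0-9]+ *$")

def lex_int(token: str) -> int:
    """Validate with the regex, then convert at arbitrary precision."""
    if not INT_RE.match(token):
        raise ValueError(f"not an integer: {token!r}")
    return int(token)   # Python ints are unbounded, so no precision cap here
```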
It's unfortunate that the majority of programmers these days are so computer illiterate that they can't write a parser for matching parens and call you an elitist for pointing out this is something anyone with a year of programming should be able to do in their sleep.
Because of this article (which I encountered a year ago) I would say Parsing JSON is no longer a minefield.
I had to write my own JSON parser/formatter a year ago (to support Java 1.2 - don't ask) and this article and its supporting github repo (https://github.com/nst/JSONTestSuite) was an unexpected gift from the heavens.
This might be throwing a lit match into a gasoline refinery, but why not opt for XML in some circumstances?
Between its strong schema support and WSDL support for internet standards like SOAP web services, XML covers a lot of ground that JSON encoding doesn't necessarily have without add-ons.
I say this knowing this is an unfashionable opinion and XML has its own weaknesses, but in the spirit of using web standards and LoC approved "archivable formats", IMO there is still a place for XML in many serialization strategies around the computing landscape.
Json is perfect for serializing between client and server operations or in progressive web apps running in JavaScript. It is quite serviceable in other places as well such as microservice REST APIs, but in other areas of the landscape like middleware, database record excerpts, desktop settings, data transfer files, Json is not much better or sometimes even slightly worse than XML.
XML cannot be parsed into nested maps/dictionaries/lists/arrays without guidance from a type or a restricted XML structure.
JSON can do that. It also maps pretty seamlessly to types/classes in most languages without annotations, attributes, or other serialization guides.
It also has explicit indicators for lists vs. subdocuments vs. values for keys, which XML does not. XML tags can repeat, can have subtags, and then there are tag attributes. A JSON document can also be a list, while XML documents must be a tree with a single root element.
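A small Python illustration of the difference (the element and key names here are made up):

```python
import json
import xml.etree.ElementTree as ET

# JSON's syntax alone says which values are lists, maps, or scalars:
doc = json.loads('{"name": "x", "tags": ["a"]}')
assert doc["tags"] == ["a"]   # unambiguously a one-element list

# The "same" XML is ambiguous without a schema: is <tag> a repeatable list
# element or a single child? The reader has to decide.
root = ET.fromstring("<item><name>x</name><tag>a</tag></item>")
tags = [e.text for e in root.findall("tag")]   # *we* chose to treat it as a list
```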
XML may be acceptable for documents. But seeing as how XHTML was a complete dud, I doubt it is useful even for that.
And we didn't even need to get into the needless complexity of validation, namespaces, and other junk.
Syntax aside, I think the original mistake is IDLs, schemas, and other attempts at formalism.
WSDL, SOAP, and all their precursors were attempted in spite of Postel's Law.
Repeating myself:
Back when I was doing electronic medical records, my two-person team ran circles around our (much larger) partners by abandoning the schema tool stack. We were able to detect, debug, correct interchange problems and deploy fixes in near realtime. Whereas our partners would take days.
Just "screen scrape" inbound messages, use templates to generate outbound messages.
I'd dummy up working payloads using tools like SoapUI. Convert those known-good "reference" payloads into templates. (At the time, I preferred Velocity.) Version everything. To troubleshoot, rerun the reference messages, diff the captured results. Massage until working.
Our partners, and everyone I've told since, just couldn't grok this approach. No, no, no, we need schemas, code generators, etc.
There's a separate HN post about Square using DSLs to implement OpenAPI endpoints. That's maybe a quarter of the way to our own homemade solution.
I personally like XML a lot for rich text (I like HTML better than TeX) and layout (like in JSX for React), and it's not horrible if you want a readable representation for a tree, but I can't imagine using it for any other purpose.
JSON is exactly designed for object serialization. XML can be used for that purpose but it's awkward and requires a lot of unnecessary decisions (what becomes a tag? what becomes an attribute? how do you represent null separately from the empty string?) which just have an easy answer in JSON. And I can't think of any advantage XML has to make up for that flaw. Sure, XML can have schemas, but so can JSON.
I will agree that JSON is horrible for config files for humans to edit, but XML is quite possibly even worse at that. I don't really like YAML, either. TOML isn't bad, but I actually rather like JSON5 for config files - it's very readable for everyone who can read JSON, and fixes all the design decisions making it hard for humans to read and edit.
Two of the biggest advantages of XML are attributes and namespaces. I miss these in JSON.
As AtlasBarfed mentioned, JSON has a native map and list structure in its syntax, which is sorely missed in XML. You have to rely on an XML Schema to know that some tag is expected to represent a map or list.
JSON with attributes and namespaces would be my ideal world.
Once you've parsed the first minefield, another crop emerges: interpreting the result. Even the range of values seen in the wild for a supposedly simple boolean attribute is just mind-boggling. Setting aside all the noise from jokers trying it on with fuzzing engines, we'll see all of these presented to various APIs:
That last looks like a doozy, but old lags will guess what's going on right away. It's the octets of the 8-bit string "true", misinterpreted as UCS-2 (16-bit wide character) code points and then spat out as UTF-8. Google translates it, quite appropriately, as "Enemy".
Oddly though, according to my records, never seen a "NULL".
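That round trip is easy to reproduce in Python (assuming the little-endian flavor of the misread):

```python
# The four octets of ASCII "true", misread as little-endian 16-bit code units:
mojibake = b"true".decode("utf-16-le")
print(mojibake)   # two CJK characters, U+7274 and U+6575 (the latter: "enemy")
```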
JSON sucks. Maybe half our REST bugs are directly related to JSON parsing.
Is that a long or an int? Boolean or the string "true"? Does my library include undefined properties in the JSON? How should I encode and decode this binary blob?
We tried using OpenApi specs on the server and generators to build the clients. In general, the generators are buggy as hell. We eventually gave up as about 1/4 of the endpoints generated directly from our server code didn't work. One look at a spec doc will tell you the complexity is just too high.
We are moving to gRPC. It just works, and takes all the fiddling out of HTTP. It saves us from dev slap fights over stupid cruft like whether an endpoint should be PUT or POST. And saves us a massive amount of time making all those decisions.
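The kind of defensive checks those questions force, sketched in Python (the payload here is invented):

```python
import json

payload = json.loads('{"active": "true", "count": 9000000000}')

# "true" the string is not true the boolean: validate, don't assume.
assert payload["active"] is not True
assert isinstance(payload["active"], str)

# And "is that a long or an int?" — Python widens silently; a statically
# typed consumer expecting int32 would overflow on this value.
assert isinstance(payload["count"], int)
assert payload["count"] > 2**31 - 1
```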
I have had the absolute joy of working with gRPC services recently. Static schemas and built in streaming mechanics are fantastic. It definitely removes a lot of my gripes with REST endpoints by design.
Ah, so that's the source of that Chrome bug that we saw last week. Customers on Chrome for Windows (only that, not Chrome for Linux or macOS) were complaining that the search on our statically-generated documentation site was not working. The search is implemented by a JavaScript file that downloads a JSON containing a search index, and it turns out that this search index had too much nesting for Chrome on Windows's JSON parser. This would reliably produce a stack overflow:
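For contrast, CPython's json module turns the same shape of input into a catchable error instead of a native stack overflow; a small repro sketch:

```python
import json

# 100,000 levels of nesting, syntactically valid JSON.
deep = "[" * 100_000 + "]" * 100_000

try:
    json.loads(deep)
except RecursionError:
    # The parser enforces the interpreter's recursion limit and raises,
    # rather than smashing the native stack.
    print("depth limit hit as a catchable error, not a crash")
```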
I've been using simdjson (https://github.com/lemire/simdjson) to process and maintain giant JSON structures, and it's faster than any other parser I've tried. I was able to replace my previous batch job with this as it gives real-time performance.
> Abstract Syntax Notation One (ASN.1) is a standard interface description language for defining data structures that can be serialized and deserialized in a cross-platform way. It is broadly used in telecommunications and computer networking, and especially in cryptography.
ASN.1 is incredibly hard to actually implement. There are dozens of cases of security bugs based on bad parsers. There are also a dozen different encodings of ASN.1 data, including JSON (JER). Its age also means that it has a bunch of obsolete datatypes.
Protobuf and friends have most of the power without a lot of the drawbacks.
A lot of companies still use typed messages like Protocol Buffers, which makes this a significantly smaller issue. Especially since the message format was designed to be portable.
A question: just like JSON's syntax was based on JavaScript, could we create a JSON schema syntax based on TypeScript? So the schema would essentially be a TypeScript type annotation.
Parsing is a minefield. General purpose computing systems are minefields.
Of all the human-readable formats I've ever worked with, only S-expressions have proven easier and safer to parse. json.org even has unambiguous railroad diagrams!
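And the matching-parens parser really is small; a minimal S-expression reader in Python (atoms and nested lists only — no strings, quoting, or numbers):

```python
def parse_sexp(src: str):
    """Parse one S-expression into nested Python lists of atom strings."""
    # Pad parens with spaces so a plain split() tokenizes everything.
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        tok = tokens[pos]
        if tok == "(":
            items, pos = [], pos + 1
            while True:
                if pos >= len(tokens):
                    raise ValueError("unbalanced parentheses")
                if tokens[pos] == ")":
                    return items, pos + 1
                item, pos = read(pos)
                items.append(item)
        if tok == ")":
            raise ValueError("unexpected )")
        return tok, pos + 1                 # a bare atom

    if not tokens:
        raise ValueError("empty input")
    result, pos = read(0)
    if pos != len(tokens):
        raise ValueError("trailing tokens")
    return result
```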
I wish EDN would catch on. The simplicity of JSON with better number handling, a few more very useful data types like namespaced keywords and sets, arbitrarily complex keys, and an even terser, more readable syntax. [k v k v] beats [k: v, k: v] hands down, and you can use whitespace commas if you want them.
In case anyone doesn't click the link, it's much simpler than JSON, leaves interpretation up to the reader, supports polymorphic data, and has some tweaks to make it nicer to edit by hand. I've used this in a bunch of personal projects and it maps all the models I've come across perfectly. If you need more power in your format you're better off using Lua than YAML.
I have a Rust Serde implementation 90% complete I could finish up if anyone wants it.
The bigger question is, what is there to be done? What is the road to a more uniform handling of JSON? I've handled some JSON before and it's usually fairly easy until you catch one of these strange implementation quirks. But I'm not sure that those quirks can be ironed out at this point.
Can you help me understand the problem? These things seem like corner cases that you could just Not Do(TM) and then you don’t have to worry about it.
What am I missing, when do these gotchas become an actual problem for you as a developer?
I’m not sure I’ve used any technology that was free of footguns, and JSON appears to have fewer of them than the average programming language or library.
The product owner perspective on this should be "nothing supports anything unless it is tested".
If you pick up a standard and just assume other products will be able to work with it, you're in for a surprise. I don't care if it's TCP sockets or .ini files; if you didn't test compatibility with the product you expect will interact with yours through the standard, consider it unstable, and don't advertise support for it.
Sometimes you have to support a standard itself, like WPA2, so you implement the standard according to internet engineering best practice: be liberal with what you accept, and conservative with what you transmit (or something to that effect). Then test compatibility with the major products you know will want to use it, and fix the bugs you find.
I've been of the opinion for a while that a lot of issues could be resolved if we agreed on a streamable binary format that had good definitions for data types (including integers and dates).
String formats are great and all for viewing in whatever text viewer, but they're so inefficient, and then you have the whole mess of escaping strings inside of strings and encoding binary data as text.
If we all agreed on a binary format then there would be a viewer for it in every debugging tool.
ASN.1, Protobuf, BSON, Ion, MessagePack, whatever. I would prefer a binary format that doesn't repeat keys, for efficiency, where the schema can be sent separately or inlined. But even one that's basically binary JSON with more types would be a step up.
I'm not a professional coder. And I mostly work with tabular data, in spreadsheets and SQL. I like to get my data as delimited text files. Ideally, delimited with some character that's 100% guaranteed to never occur in the data. In my experience, "|" is often a good option, but you never know. And CSV, even with quotes, can be a nightmare, especially if the data contains addresses. Or names with quoted nicknames.
Anyway, given the choice, I always pick JSON over XML. Because with JSON, I can always identify the data blocks that I need, and parse them out with bash and spreadsheets. Not with XML, however. Just as not with HTML.
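For what it's worth, the quoting cases that make hand-rolled delimiting a nightmare (addresses with commas, names with quoted nicknames) are exactly what a CSV library handles; a Python sketch with invented data:

```python
import csv
import io

# Fields containing the delimiter, quotes, and a quoted nickname.
row = ['Robert "Bob" Smith', '12 Main St, Apt 4']

buf = io.StringIO()
csv.writer(buf).writerow(row)          # writer doubles the embedded quotes

# Reading it back recovers the original fields exactly.
recovered = next(csv.reader(io.StringIO(buf.getvalue())))
```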
> For example, Xcode itself will crash when opening a .json file made of the character [ repeated 10000 times, most probably because the JSON syntax highlighter does not implement a depth limit.
I certainly would not criticize such a thorough examination for being facile, but I do want to point out that the conclusion "But sometimes, simple specifications just mean hidden complexity" is not supported by the article. Almost all of the edge cases are caused by implementors ignoring or extending the simple specification.
The crashing test cases look scary from a security perspective, especially in the C-based parsers. Does anyone know if these results are still up to date or if the bugs have already been fixed?
juliusmusseau | 6 years ago
I'll use JavaScript numeric literals here as my translation medium (ironic!):
Norway locale parses it to: 1001
USA locale parses it to: 1.001
France locale parses it to: NaN
https://docs.oracle.com/cd/E19455-01/806-0169/overview-9/ind...
hombre_fatal | 6 years ago
If parsing JSON is bad, XML is a clusterfuck.
umvi | 6 years ago
Really, the only time it would matter is if you are parsing user-provided JSON and said user was trying to exploit your parser somehow.
But 99% of the time, I'm not parsing user-provided JSON, so I don't ever encounter these corner cases and parsing/serialization works great.
juliusmusseau | 6 years ago
Consider this JSON: {"key": 9223372036854775807}. With most parsers it never fails.
But... some JSON parsers (including JS's eval) parse it to 9223372036854776000 and continue on their merry way.
The problem isn't user-provided JSON here. The problem is user-provided data (or computer-provided data) that's inside the JSON.
rachelbythebay's take (http://rachelbythebay.com/w/2019/07/21/reliability/):
On the other hand, if you only need 53 bits of your 64 bit numbers, and enjoy blowing CPU on ridiculously inefficient marshaling and unmarshaling steps, hey, it's your funeral.
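The precision cliff is easy to reproduce in Python: its own json module keeps integers exact, and the loss appears the moment an IEEE double gets involved.

```python
import json

n = 9223372036854775807                 # 2**63 - 1, a perfectly good int64
assert json.loads(json.dumps(n)) == n   # Python round-trips it exactly

# A double has only 53 bits of significand, so the nearest representable
# value is 2**63 itself — the integer silently rounds up.
assert float(n) == 2**63
```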
majewsky | 6 years ago
I take it you've never implemented a service with a REST API.
truth_seeker | 6 years ago
Ref link: https://v8.dev/blog/v8-release-76
carapace | 6 years ago
https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One
Or keep re-inventing the wheel. It's not like the people paying you will notice or care, eh?
filoeleven | 6 years ago
https://github.com/edn-format/edn
mkl | 6 years ago
This doesn't seem simpler than JSON otherwise, though, e.g. type declarations and optional quotes.
Why is leaving interpretation up to the reader desirable? Shouldn't things always come out the same?
Asterisks are an unusual choice of comment syntax. What if your comment needs to contain an asterisk? Why not "//..." and/or "/* ... */", or "#..."?
hirundo | 6 years ago
Something like https://github.com/nst/JSONTestSuite? Could a parser test suite be an official component of an RFC like 8259?
saagarjha | 6 years ago
FWIW, this appears to have been fixed recently.