I sure hope this catches on, but we should all be aware of the hurdles:
- Little incentive for researchers to do this beyond their own good will.
- Most ML researchers are bad writers, and it's unlikely that the editing team will do the work needed (which is often a larger reorganization of a paper and ideas) to improve clarity.
- Producing great writing and clear, interactive figures, and managing an ongoing github repo require nontrivial amounts of extra time, and researchers already have strained time budgets.
- It requires you to learn git, front-end web design, random javascript libraries (I for one think d3 is a nuisance), exacerbating the time suck on tangents to research.
Maybe you could convince researchers to contribute with prizes that align with their university's goals. Just spitballing here, but maybe for each "top paper" award, get a team together to further clarify the ideas for a public audience, collaborate with the university and their department and some pop-science writers, and get some serious publicity beyond academic circles. If that doesn't convince a university administration that the work is worth the lower publication count, what will?
In the worst case it'll be the miserable graduate students' jobs to implement all these publication efforts, and they won't be able to spend time learning how to do research.
You're absolutely right that this is a lot of work, and not many ML researchers have all the skills needed for it.
In the short term, Distill's editorial assistance will help authors produce outstanding papers, although authors need to be willing to put in the work as well.
In the longer term, I'd like to explore matchmaking between data visualization people who would like to get into machine learning and machine learning researchers publishing papers.
And in the very long term, I think the right solution is to add a new component to the research ecosystem. Just like we have people who specialize as research engineers, theoreticians, and experimentalists, I'd like to have a respected "research distiller" specialization. Eventually, I'd like to try to start special grants for research groups to have someone focused on this.
I'm a junior faculty member working in ML with no personal knowledge of web development, d3, etc. While the papers currently on Distill are absolutely gorgeous and will be an invaluable tool for learning advanced ML concepts, I simply cannot see myself or my students putting in the time to actually create something like that.
Unless a student is especially adept at the specific tools needed to create these and especially enthusiastic at using them, I will actively discourage them from doing it. The time needed is simply not worth it right now.
I would be happy and grateful if tools for creating these articles become easier to learn and use eventually, such that even the lower-budget, time-constrained researchers could afford to create them.
I disagree with the first point. I'm working on a Distill article with Chris and Shan, and the major draw for this has been impact. It seems very plausible that an article on Distill has the potential to reach a far broader (and different) audience than a paper in even a top-tier mathematical journal like SIAM would.
I won't deny that the time commitment needed for a Distill article is nontrivial - it is far more work than a technical blog post. But in terms of a pure tradeoff of time per publication, the calculus makes sense. Most of the work of research distillation and synthesis is already part of the research process, and writing a Distill article is just a matter of putting it all down on paper. Doing research is a far more time-consuming and less predictable process.
Good points. We do believe that well-written articles save readers time on the other end, which hopefully will offset some (if not all) of the cost of producing them. We also believe that taking the time to edit your ideas not only helps your audience but helps your own thinking. Outsourcing the work to others would most likely just lead to adding a veneer to an article rather than a substantive improvement. Instead of outsourcing we're thinking about how to foster collaborations in the future.
I think you have emphasized the main point: a lot of work for a low reward. Research is more about exploring the state of the art and new avenues; popularization and graphics are more akin to what book authors do (for example Nielsen's open-science writing, and other interesting books), but for a young researcher the most important and rewarding goal is to publish.
Well, now you need a Distill WYSIWYG, to make it usable (for most of the intended audience).
Hey, let's be honest: most academics (that I know) still don't even use LaTeX (or refuse to do so). This is really cool, but it requires way too many skills (js/css3/html5/distill-extensions and node.js).
Personally, my team and I had a really great experience with sharelatex.com, even though I was the only one who knew LaTeX. I liked that it's also open source with a permissive license. I would rather host that on sandstorm.io next time, or just pay for the comfort offered by overleaf.com (I've never seen such a beautiful collaborative LaTeX editor).
• What about vendor lock-in?
• Can you export to LaTeX, Word or PDF?
• Can you self-host it for your team or company?
You're right. I've myself been using git, GitHub, Keynote, ffmpeg, Medium, JS, Python, d3 and others to build blog posts.
I clearly don't expect people to do that much. I can only do it because I come from web development, and very nice tools have started to appear recently.
People in research need a design framework: a set of templates for Keynote/PPT/JS/CSS (think about how much traction Bootstrap got). Distill is doing an awesome job of showing an example of what you could do.
Maybe Distill could open-source the templates they use to build these blog posts?
Your criticism is spot on. If something like Distill existed for my own research area I would applaud it, but probably not use it because of time constraints.
On the other hand, being able to write well and to create good interactive illustrations are valuable skills. Maybe we could incorporate these things into seminars or otherwise crowdsource the creation of e.g. individual figures?
I'm not in academia, but I guess the impact (citations) you could get with a Distill-like paper will be higher than what you would get in a traditional paper-based journal. So, I guess this will help Distill get traction.
For reference, the announcement posts:
Google Research: https://research.googleblog.com/2017/03/distill-supporting-c...
DeepMind: https://deepmind.com/blog/distill-communicating-science-mach...
OpenAI: https://openai.com/blog/Distill/
YC Research: http://blog.ycombinator.com/distill-an-interactive-visual-jo...
Chris Olah: http://colah.github.io/posts/2017-03-Distill/
As I said in Rob's thingy, I hope you win over the tenure committees and hiring committees; they don't have to respect this format, but they're the ones whose respect you need.
Thank you for this effort. I'm a fan of your blog articles. A question regarding Distill: is it a journal like a conventional journal, targeting new research? Or is it a journal for educational articles that explain older research better?
I hope to contribute to an effort to better explain deep learning. I don't know if that is what Distill is looking for.
I've been trying to read more primary source information, sort of as my own way of combatting "fake news" but before that term was coined. There's a learning curve to it, but I've found that reading S1 filings and Quarterly Earnings Reports can be more enlightening than reading a news article on any given company. Likewise, reading research papers on biology and deep learning is significantly more valuable than reading articles or educational content on those topics.
As you'd imagine though, it's really hard. Reading a two page research paper is a very different experience from reading a NYTimes or WSJ article. The information density is enormous, the vocabulary is very domain specific, and it can take days or weeks of re-reading and looking up terms to finally understand a paper.
I'm really excited about Distill; there's a lot of value in making research papers more accessible and interesting. I've noticed that the ML/AI field has been a pioneer in the research publication process: some papers are now published with source code on GitHub and with the authors answering questions on r/machinelearning. This seems like a really great next step, and I hope other fields of science will break away from traditional journals and do the same.
I don't want to undermine visualizations, they are awesome, but one of the big problems I see with ML research is the lack of reproducibility. I know that Google, Facebook and some others already share associated source repos, but it should almost be mandatory when working with public benchmark datasets. Source + Docker images would be even better.
I worked in clinical research in a past life, and studies would be heavily discounted if they couldn't be reproduced. A highly detailed methods section was key. Many ML papers I see tend to have incredibly formalized, LaTeX-and-Greek-obsessed methods sections, but fall far short of providing anything that would allow reproduction. Some ML papers, I swear, must have run their parameter searches a thousand times to overfit and magically achieve 99% AUC.
Worse, I actually have tons of spare GPU farm capacity I'd love to devote to reproducing research, tweaking it, trying it on adjacent datasets, etc. But the effort to reproduce is too high for most papers.
It is also disappointing to see various input datasets strewn about individuals' personal homepages, where they sometimes end up broken. Sometimes the "original" dataset is in a pickled form after having already gone through multiple upstream transformations. I hope Distill can instill some good practices in the community.
I think that having a venue that can publish non-traditional academic artifacts is an important step for reproducibility, even if it isn't our focus.
It seems clear to me that the future will involve some way of linking reproducibility to papers. If we want to find that future, we need a way for people to experiment with what a publication is.
The announcements and About page indicate an emphasis on visuals and presentation, which I appreciate. But when I think of "modern machine learning," I think of open source and reproducibility (e.g. Jupyter notebooks).
Will the papers published on Distill maintain transparency of the statistical process?
I see in the submission notes that articles are required to be a public GitHub repo, which is a positive indicator, although the actual code itself does not seem to be a requirement.
I totally agree that this is very important. While it isn't currently our primary focus, having a publishing platform that can accommodate a variety of content types (including code and data) feels like a step in the right direction.
As a developer with a weaker background in mathematics, I face a language barrier with many modern algorithms. After lots of research I can understand and explain them in code, but I have no idea what your artistic-looking MathML means.
Visualizations or algorithms described using code are much, much easier for me to understand and serve as a great starting point for unpacking the math explanations.
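For example (just a toy sketch of my own, not taken from any paper): something like a softmax is far easier for me to parse as a few lines of numpy than as dense notation:

```python
import numpy as np

def softmax(x):
    # exp(x_i) / sum_j exp(x_j), with max(x) subtracted first for numerical stability
    z = np.exp(x - np.max(x))
    return z / z.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))  # -> approximately [0.09, 0.24, 0.67]
```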
I understand where you're coming from and you raise a valid point, but the ML/AI field is heavily academic and oriented around research. The target audience is people with a very strong math background and the necessary context.
I would recommend picking up a book on computer science or algorithms; even just a cursory reading helps a lot. CS is very much not just programming, and it is heavily restricted when described only through code.
Is there any concern about a web-native journal being less "future-proof"? I've come across quite a few interactive learning demonstrations in Flash/Java that no longer work.
This is a high priority for us. By focusing on web standards and avoiding proprietary plugins, we're pretty confident that the content will be future-proof.
This is great, but it would have been even better if Distill were designed to play well with the current system. The vast majority of researchers are focused on publishing at various conferences with strict deadlines. Even if they had all the skill sets and time to produce these beautiful illustrations, I highly doubt this will change.
Also, it is very likely that veterans in the field will think of this format as too verbose and too sugar-coated, more appropriate for less math-savvy users and therefore not mainstream. Furthermore, I really feel TeX is irreplaceable unless you have all of its features covered. All of the historic efforts to replace TeX in research - even with the bells and whistles of WYSIWYG editors - have failed, and it's important to learn from those failures. You would be surprised how many researchers insist on printing out papers to read even when they have access to tablets and PCs.
Instead of being another peer-reviewed journal, Distill could act as the following:
- a platform to publish supplemental material and code
- a platform to manage communication/issues post-publication
- a platform for readers to invite other readers for peer review and to generate a "front page" based on some sort of reviewer trust relationship
- a platform to host Python and MATLAB code with web frontends, without researchers having to learn new developer skills
- a place that supports PDF submissions, but without all the elitism of arXiv, using algorithms to create the "front page" based on some sort of peer-reviewer ranking
The above features are sorely missing, and Distill has a good opportunity to become an "add-on" to current academic publishing systems as opposed to another peer-reviewed journal.
This is really exciting! Chris et al: have you guys seen Keras.js (https://github.com/transcranial/keras-js)? It could probably be useful for certain interactive visualizations or papers.
How does this provide IF ratings? Probably irrelevant for industry, but publishing in academia is all about IF, no matter how bad and corrupt one might think it is.
And what about long-term stability/presence? Most top journals and their publishing houses (NPG, Elsevier, Springer) are likely to hang around for another decade (or two...), while I don't feel so sure about that for a product like GitHub. Maybe Distill is/will be officially backed (financially) by the industry names supporting it?
That being said, I'd love to see this succeed, but there seems much to be done to get this really "off the ground" beyond being a (much?!) nicer GitXiv.
Our present JIF is undefined because we haven't existed for two years yet.
If you just apply the formulas anyway, you'll get a JIF of (6 citations)/(4 publications) = 1.5. Again, this number is really pessimistic because those publications are only a few months old and haven't had time to accumulate citations.
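For concreteness, here's the calculation being applied, as a rough sketch (ignoring the exact citation-window bookkeeping a real index would do):

```python
def two_year_jif(citations_this_year_to_recent_items, citable_items_last_two_years):
    # Journal impact factor: citations received this year to items published in
    # the previous two years, divided by the number of citable items from those years.
    return citations_this_year_to_recent_items / citable_items_last_two_years

print(two_year_jif(6, 4))  # 1.5, using the numbers above
```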
> And what about long-term stability/presence.
We aren't particularly tied to GitHub beyond it being convenient. Even if the journal died, keeping it up indefinitely would be very cheap.
More than that, we're looking into joining projects like LOCKSS to ensure preservation of the academic record.
> but there seems much to be done to get this really "off the ground" beyond being a (much?!) nicer GitXiv.
We've actually done a lot of the logistics needed to legitimize a journal. We've registered as a journal with the Library of Congress, joined CrossRef, and built infrastructure to integrate our metadata with the library system.
Of course, there's a lot more to do. But the biggest thing is to just publish great content and run Distill as a serious, high-quality venue.
While this is very nice, I'm a bit confused about the target. What kind of material is intended to be published here in the future?
Because the blog post and title seem to be describing it as a "journal" intended to replace PDF publications, but the actual content appears to be more in the tutorial/survey category, e.g. "how to use t-SNE," etc. Is this intended to be a place to publish new research in the future, or is it meant more for enhanced Medium-style blog posts?
Both are fine, I just find the dissonance between the announcement and the actual content a bit confusing.
I feel like science publication in general could benefit from disruption of the publishing model. I'm not sure that the toolkit Distill has provided is quite enough to totally change the paradigm, and it is currently restricted to only one field.
I like the idea of making research approachable for the non-scientist, and the more important question of whether there is a more efficient form for research papers to take (in terms of communicating new science between scientists).
Is there any relevant work along this vector of thought that I should check out? I would really love to do some work on this.
I made an awesome list recently just for this topic: github.com/sp4ke/awesome-explorables
Would saving Jupyter notebooks as .html work?
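For what it's worth, nbconvert can already render a notebook as a standalone HTML page; a minimal sketch (the file names here are just placeholders):

```python
from nbconvert import HTMLExporter

# Equivalent to the CLI: jupyter nbconvert --to html notebook.ipynb
exporter = HTMLExporter()
body, resources = exporter.from_filename("notebook.ipynb")

with open("notebook.html", "w", encoding="utf-8") as f:
    f.write(body)
```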
I, like every other researcher worth her/his salt, am always running behind the clock when it comes to deadlines and literature to review. So, yeah, coaxing myself into investing time in css/html/js in lieu of picking up more math tools seems criminal to me. Am I alone in this?
PS: I have published in all of the top-4 ML conferences but suck at html/css/js. What is my pathway to Distill now?
I am a UI-developer who has been wanting to learn ML forever.
I started working on
1. fast.ai
2. think bayes
3. UW data science @ scale w/ coursera
4. udacity car nano degree
I'm going to write some articles about what I learn and hopefully move into the ML field as a data engineer in 6 months. I figure I got into my current job with a visual portfolio of nicely designed CSS/JS demos; maybe the same thing will work for AI.
Yes. Everything is published under Creative Commons Attribution.
(One of the members of our steering committee, Michael Nielsen, has a significant history advocating for open science. I think there's about a snowball's chance in hell he'd be involved if we weren't. :P )
Passages like this one back that up: "Distill articles must be released under the Creative Commons Attribution license." With a little more flexibility to keep things private before publishing: "You can keep it private during the review process if you would like."
As a side note, who made the interface design for this? http://playground.tensorflow.org/#activation=tanh&batchSize=...
I am very interested in getting into this space from a design perspective.
How do I donate to this?
If you plug that into IFTTT, Zapier, or something to that effect, you hopefully then have a weekly feed.
Though I do agree, an option to sign up for updates directly on the website would be much better ;)
I agree that the DOI should be included in the BibTeX citation.