martingab's comments
martingab | 3 years ago | on: AI-guided robots are ready to sort recyclables
martingab | 3 years ago | on: My students cheated... a lot
Another argument given was: even if you only have to *think* about using such a tool, you are already in a situation where good scientific practice is no longer guaranteed. In other words: if you/the students had followed all rules of good scientific practice right from the beginning, you would never need such a tool. But I guess if you are the developer of such a tool, or work in that area of research, you probably see things differently...
Also: how many different ways are there to explain the rules of scientific practice within 150 words? How much similarity would you expect from O(100) different students - even if they write independently? I'm not sure whether such tools take that into account. On a different scale: when piping e.g. a typical PhD thesis through such a tool, the first introductory paragraphs will always raise red flags (simply because that topic has already been introduced 10000 times and everyone read more or less the same introductory textbooks). The important part - the main part of the thesis - should of course be unique (but if the supervisor/examiner/committee is not able to "detect" this on their own, well...). Of course, literally copy&pasting an introduction is still not okay. But - as the blogger also said - this can easily be detected by issuing a simple yawhoogle search in case the text already reads suspiciously (e.g. if the style of writing varies a lot between paragraphs, etc.).
So yes, I'd agree that the use of such tools is rather limited when it comes to "real" scientific works, but in this particular case it was quite neat to see how easily you can use one to automatise the collection of evidence if you have a large class of students...
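As an aside, the "collection of evidence" part really can be automated with very little code. Here is a toy sketch (entirely hypothetical; real plagiarism tools use far more elaborate models) that flags pairs of short submissions with a high word-trigram overlap:

```python
# Toy sketch: flag suspiciously similar short submissions via
# word-trigram Jaccard similarity. Hypothetical example only.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard similarity of two texts' n-gram sets (0.0 .. 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

def flag_pairs(submissions, threshold=0.5):
    """Collect 'evidence': all pairs of submissions above the threshold."""
    names = sorted(submissions)
    return [(x, y, jaccard(submissions[x], submissions[y]))
            for i, x in enumerate(names)
            for y in names[i + 1:]
            if jaccard(submissions[x], submissions[y]) >= threshold]

subs = {
    "alice": "good scientific practice requires citing all sources used",
    "bob":   "good scientific practice requires citing all sources used properly",
    "carol": "I promise to write everything myself and cite my sources",
}
print(flag_pairs(subs))  # only the alice/bob pair shares almost all trigrams
```

Of course this toy version would also flag the legitimate overlap discussed above (everyone paraphrasing the same 150-word rules), which is exactly the false-positive problem.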
martingab | 4 years ago | on: Vim Color Schemes
martingab | 4 years ago | on: Today Sci-Hub is 10 years old. I'll publish 2M new articles to celebrate
However, I got the (personal) impression that this is only well established in fundamental research (which typically comes with little economic interest). As soon as the research is paid not by the state but by private companies (such as in medicine, robotics or any other "applied science"), scientists have a hard time choosing an OA journal (i.e. either it does not exist or you are not allowed to publish there). Changing this scheme is of course quite difficult, since too many commercial parties still benefit from it (so it can likely only be changed by law)...
martingab | 4 years ago | on: How to Fairly Share a Watermelon
martingab | 4 years ago | on: We Switched to Mattermost
The integrations with gitlab issues/groups/etc. also look neat but are barely used tbh. Can't compare with rocket.chat.
martingab | 5 years ago | on: Google Alternatives
martingab | 5 years ago | on: Challenge to scientists: does your ten-year-old code still run?
Anyways, in physics several experimental proofs are always required for a theory. There are also several "software experiments" for e.g. predicting the same observables. Therefore, researchers need to be able to compile and run the code of their competitors in order to compare and verify the results in detail. Along the way, bug-hunting/fixing sometimes also takes place - of course. So applying the article's suggestions would have the potential to accelerate scientific collaboration.
btw; I know some people who still work with the data taken at the LEP experiment, which was shut down almost 20 (!) years ago, and they have a hard time combining old detector simulations, Monte Carlos etc. with new data-analysis techniques, for the exact same reasons mentioned in the article. For large-scale experiments it is a serious problem, which nowadays gets much more attention than in LEP times, since the LHC anyways has obvious big-data problems to solve before its next upgrade, including software solutions.
martingab | 5 years ago | on: Challenge to scientists: does your ten-year-old code still run?
The cross-checking is good scientific practice anyways, not only because of bugs in the code (that's actually a sub-leading problem imho), but because of the difficulty of the problems and the complexity of their solutions (and their reproducibility). In that sense, cross-checking should uncover both scientific "bugs" and programming bugs. The "debugging" is partly also done at the community level - at least in our field of research.
However, it is also a matter of efficiency. I - and many others too - need to re-implement not because of bug-hunting/cross-checking but simply because we do not understand the "ugly" code of our colleagues, and instead of taking the risk of breaking existing code we simply write new code, which is extremely inefficient (others may take the risk and then waste months on debugging and reverse-engineering, which is also inefficient). So my point about writing "good code" is not so much about avoiding bugs but about being kind to your colleagues, saving them nerves and time (which they can then spend on actual science) and thus also saving taxpayers' money...
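To illustrate what I mean by cross-checking, here is a minimal, hypothetical sketch (not anyone's actual analysis code): two independent implementations of the same "observable" - here the integral of exp(-x^2) over [0, 1], once via the trapezoidal rule and once via term-by-term integration of the Taylor series - must agree within a tolerance, otherwise there is either a programming bug or a scientific one:

```python
# Minimal cross-check sketch (hypothetical example): two independent
# implementations of the same observable, compared within a tolerance.
import math

def observable_trapezoid(n=10_000):
    """Implementation 1: trapezoidal rule for integral of exp(-x^2) on [0, 1]."""
    h = 1.0 / n
    s = 0.5 * (math.exp(0.0) + math.exp(-1.0))  # endpoint terms
    for i in range(1, n):
        x = i * h
        s += math.exp(-x * x)
    return s * h

def observable_series(terms=30):
    """Implementation 2: exp(-x^2) = sum (-1)^k x^(2k) / k!, integrated
    term by term => integral = sum (-1)^k / ((2k+1) k!)."""
    return sum((-1) ** k / ((2 * k + 1) * math.factorial(k))
               for k in range(terms))

a, b = observable_trapezoid(), observable_series()
assert abs(a - b) < 1e-6, f"cross-check failed: {a} vs {b}"
print(a, b)  # both should be close to 0.746824...
```

In real life the two implementations would of course come from two independent groups, which is precisely what makes the agreement meaningful.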
martingab | 5 years ago | on: Challenge to scientists: does your ten-year-old code still run?
I myself had a very bad experience with extending the undocumented Fortran 77 code (lots of gotos and common blocks) of my supervisor. In the end, I decided to rewrite the whole thing, including my new results, instead of somehow embedding my results into the old code, for two reasons: (1) I'm presumably faster rewriting the whole thing including my new research than struggling with the old code, and (2) I simply would not trust the numerical results/phenomenology produced by the old code. After all, I'm wasting 2 months of my PhD on the marriage of my own results with known results, which - in principle - could have been done within one day if the code base allowed for it.
So yes, if it's a one-man show I would not put too much weight on code quality (though unit tests and git can save quite a lot of time during development), but if there is a chance that someone else is going to touch the code in the near future, it will save your colleagues time and improve the overall (scientific) productivity.
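On the unit-test point: even a tiny regression test that pins the old code's output makes such a rewrite much less scary. A minimal sketch (all names and formulas here are made up for illustration):

```python
# Hypothetical sketch: a regression test pinning the legacy code's output,
# so a rewrite can be validated before its results are trusted.

def legacy_result(x):
    """Stand-in for a value produced by the old (trusted) code."""
    return 3 * x * x + 2 * x + 1   # assumed reference behaviour

def rewritten_result(x):
    """New implementation that must reproduce the legacy behaviour."""
    return (3 * x + 2) * x + 1     # same polynomial, Horner form

def test_rewrite_matches_legacy():
    # a handful of pinned inputs covering typical and edge values
    for x in [0.0, 1.0, -2.5, 1e3]:
        assert abs(rewritten_result(x) - legacy_result(x)) < 1e-9

test_rewrite_matches_legacy()
print("rewrite reproduces legacy results")
```

With a test like this in place from day one, "somehow embedding" new results into the code becomes a much smaller risk.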
PS: quite excited about my first post here
For sure, all of the tasks they do during the day can be automated and in fact are automated at the big facilities. However, people like to buy their products - not only because they like to support them and give them the chance to contribute to the community (like it was said in the video: they pay taxes and everything) but also because there is a general market for hand-made goods. There is always a small niche of consumers who prefer hand-made items, because of their individual charm etc., over the soulless mass-manufactured alternative. I believe that this demand exists precisely because of the rise of automation (and thus is unlikely to vanish if automation is pushed further).
I'd also argue that working in a woodworking shop - being able to actually create something and (if the handicap/IQ allows for it) even be creative - has a much better effect on the overall quality of life than working an assembly-line bullshit job. I don't know of a handicap which does not allow you to work in any of these kinds of jobs but does allow you to sort plastic from paper within a reasonable amount of time (but I'm sure someone can give me an example; in that case I'd argue that it's up to us to find or "invent" a suitable job or some helper device to enable them to do so - we have the money for it).
So yes, maybe getting rid of that particular job is the best thing that could happen to the guy in the video - provided there is an alternative job available that lets him add value not only to the consumer society but also to its intellectual and creative parts, as well as to himself (relative to his level of disability).