martingab's comments

martingab | 3 years ago | on: AI-guided robots are ready to sort recyclables

In my home town there is a workshop/factory which mainly employs disabled people of all kinds. They produce e.g. wooden chairs, cup holders and the like, but also purely artistic decorative items. All at very different levels, i.e. some people nail wooden sticks together as instructed, while others have complex jobs (if they are able).

For sure, all of the tasks they do during the day can be automated and in fact are automated at the big facilities. However, people like to buy their products - not only because they like to support them and give them the chance to contribute to the community (as was said in the video: they pay taxes and everything), but also because there is a general market for hand-made goods. There is always a small niche of consumers who prefer hand-made items, with their individual charm, over the soulless mass-manufactured alternative. I believe that this demand exists precisely because of the rise of automation (and thus is unlikely to vanish if automation is pushed further).

I'd also argue that working in a woodworking shop - being able to actually create something and (if the handicap/IQ allows for it) even be creative - has a much better effect on overall quality of life than working an assembly-line bullshit job. I don't know of a handicap which rules out all jobs of that kind yet still allows you to sort plastic from paper within a reasonable amount of time (but I'm sure someone can give me an example; in that case I'd argue that it's up to us to find or "invent" a suitable job or some helper device to enable them to do so - we have the money for it).

So yes, maybe getting rid of that particular job is the best thing that could happen to the guy in the video - provided there is an alternative job available that lets him add value not only to the consumer society but also to its intellectual and creative parts, as well as to himself (relative to his level of disability).

martingab | 3 years ago | on: My students cheated... a lot

This was one of the reasons why the use of such tools was strictly prohibited at my former university.

Another argument given was: even if you only have to *think* about using such a tool, you are already in a situation where good scientific practice is no longer guaranteed. In other words: if you/the students had followed all the rules of good scientific practice right from the beginning, you would never need to use such a tool. But I guess if you are the developer of such a tool, or work in that area of research, you probably see things differently...

Also: how many different ways are there to explain the rules of scientific practice within 150 words? How much similarity would you expect from O(100) different students, even if they write independently? I'm not sure whether such tools take that into account. On a different scale: when piping e.g. a typical PhD thesis through such a tool, the first introductory paragraphs will always raise red flags (simply because that topic has already been introduced 10,000 times and everyone has read more or less the same introductory textbooks). The important part - the main body of the thesis - should of course be unique (but if the supervisor/examiner/committee is not able to "detect" that on their own, well...). Of course, literally copy&pasting an introduction is still not okay. But - as the blogger also said - this can easily be detected with a simple yawhoogle search if the text already reads suspiciously (e.g. if the writing style varies a lot between paragraphs).
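
To illustrate the point about independent short texts on a narrow topic: a toy sketch (purely illustrative, not how any real plagiarism tool works; the two sample sentences are made up) of word-trigram Jaccard similarity, which comes out surprisingly high even for independently worded answers:

```python
# Toy similarity check: Jaccard overlap of word trigrams.
# Short texts on the same narrow topic share many trigrams even
# when written independently - a baseline a detector must account for.
def trigrams(text):
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def jaccard(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

s1 = "plagiarism is the use of another persons work without proper citation"
s2 = "plagiarism means the use of another persons work without giving proper credit"
print(round(jaccard(s1, s2), 2))  # roughly 0.36 despite independent wording
```

With 100 students answering the same 150-word prompt, pairwise overlaps of this order would be the norm, not the exception.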

So yes, I'd agree that the use of such tools is relatively limited when it comes to "real" scientific work, but in this particular case it was quite neat to see how easily you can use one to automate the collection of evidence if you have a large class of students...

martingab | 4 years ago | on: Vim Color Schemes

My first thought was that this was some kind of delayed April fools' joke, since all the color schemes looked the same (b/w) to me - until I realised that I had to turn on JavaScript.

martingab | 4 years ago | on: Today Sci-Hub is 10 years old. I'll publish 2M new articles to celebrate

Depending on the field of research, it is quite common to publish only in open access journals [1]. At my university, we are actually only allowed to publish in peer-reviewed OA journals. Most of them provide a paid subscription for the print version, while the online version is free and open to everyone.

However, I have the (personal) impression that this is only well established in fundamental research (which typically comes with little economic interest). As soon as the research is paid for not by the state but by private companies (such as in medicine, robotics or other "applied sciences"), scientists have a hard time choosing an OA journal (i.e. either it does not exist or they are not allowed to publish there). Changing this scheme is of course quite difficult, since too many commercial parties still benefit from it (which can likely only be changed by law)...

[1] https://en.wikipedia.org/wiki/Open_access

martingab | 4 years ago | on: How to Fairly Share a Watermelon

Abstract: Geometry, calculus and in particular integrals, are too often seen by young students as technical tools with no link to the reality. This fact generates into the students a loss of interest with a consequent removal of motivation in the study of such topics and more widely in pursuing scientific curricula. With this note we put to the fore a simple example of practical interest where the above concepts prove central; our aim is thus to motivate students and to reverse the dropout trend by proposing an introduction to the theory starting from practical applications. More precisely, we will show how using a mixture of geometry, calculus and integrals one can easily share a watermelon into regular slices with equal volume.
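
For one concrete variant of the problem - cutting the (idealised, spherical) watermelon into parallel slices of equal volume - the cut positions follow from the spherical-cap volume formula V(h) = πh²(3r − h)/3. A minimal sketch (my own illustration under that assumption, not necessarily the method the paper uses; function names are hypothetical), solving V(h) = i/n · 4πr³/3 by bisection since V is monotone in h:

```python
from math import pi

def cap_volume(h, r):
    """Volume of a spherical cap of height h cut from a sphere of radius r."""
    return pi * h * h * (3 * r - h) / 3

def equal_slice_cuts(r, n, tol=1e-12):
    """Heights (measured from one pole) of the n-1 parallel cutting
    planes that split a sphere of radius r into n slices of equal
    volume, found by bisection on the monotone cap-volume function."""
    total = 4 / 3 * pi * r ** 3
    cuts = []
    for i in range(1, n):
        target = total * i / n
        lo, hi = 0.0, 2 * r
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if cap_volume(mid, r) < target:
                lo = mid
            else:
                hi = mid
        cuts.append((lo + hi) / 2)
    return cuts
```

Sanity check: for n = 2 the single cut lands at h = r (a plane through the center), and the cut heights are symmetric about the center, as they must be.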

martingab | 4 years ago | on: We Switched to Mattermost

We run a stupidly simple (though small-scale) GitLab instance in Docker (the official image; it's kind of a one-click install). You get Mattermost automatically; you just need to enable it in the GitLab config and it will install/run Mattermost within the GitLab container without any trouble. So if you already have (self-hosted) GitLab, Mattermost is easy to set up and maintain (actually no extra effort at all). Depends of course on the number of users, I guess...
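
For reference, a minimal sketch of that setup with the Omnibus image (hostnames are placeholders; check the GitLab docs for the exact options in your version):

```shell
# Placeholder hostnames; assumes the official gitlab/gitlab-ce Omnibus image.
# The bundled Mattermost is enabled with a single gitlab.rb setting,
# passed here via GITLAB_OMNIBUS_CONFIG:
docker run -d \
  --hostname gitlab.example.com \
  -e GITLAB_OMNIBUS_CONFIG="mattermost_external_url 'https://mattermost.example.com';" \
  -p 80:80 -p 443:443 \
  gitlab/gitlab-ce:latest
```

The bundled nginx then serves Mattermost under the configured `mattermost_external_url`, alongside GitLab itself.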

The integrations with GitLab issues/groups/etc. also look neat but are barely used, tbh. Can't compare with Rocket.Chat.

martingab | 5 years ago | on: Google Alternatives

Can anyone explain to me why exactly startpage.com is not listed? They do list other meta-searchers which have ads. I know some of the rumours about Startpage, but not mentioning it at all made me wonder what the criteria for the list are, or whether I've missed something very bad about Startpage (such that leaving it off becomes the obvious choice)...

martingab | 5 years ago | on: Challenge to scientists: does your ten-year-old code still run?

They didn't rebuild the Tevatron, but they were still able to rediscover the top quark in a different experimental environment (i.e. the LHC, with tons of different discovery channels) and have lots of fits for its properties from indirect measurements (LEP, Belle). Physics is not an exact science. If you have only one measurement (no matter whether it's software- or hardware-based), no serious physicist would fully trust the result as long as it hasn't been confirmed by an independent research group (by doing more than just rebuilding/copying the initial experiment - e.g. using slightly different approximations or different models/techniques). I'm not so much into computer science, but I guess it might be a bit different there once a proof is based on rigorous math. Even then, though, it is sometimes questionable whether the proof is applicable to real-world systems, and then one might be in a similar situation.

Anyways, in physics we always require several experimental confirmations of a theory. We also have several "software experiments" for e.g. predicting the same observables. Therefore, researchers need to be able to compile and run the code of their competitors in order to compare and verify the results in detail. Here, bug-hunting/fixing sometimes also takes place - of course. So applying the article's suggestions would have the potential to accelerate scientific collaboration.

Btw, I know some people who still work with the data taken at the LEP experiment, which was shut down almost 20 (!) years ago, and they have a hard time combining old detector simulations, Monte Carlos, etc. with new data-analysis techniques, for exactly the reasons mentioned in the article. For large-scale experiments it is a serious problem which nowadays gets much more attention than in LEP times, since the LHC has obvious big-data problems to solve before its next upgrade anyway, including on the software side.

martingab | 5 years ago | on: Challenge to scientists: does your ten-year-old code still run?

I see your and MaxBarraclough's concerns. In my case, there exist 5-6 codes which do - at their core - the same thing ours does, and they have all been cross-checked against each other within either theoretical or numerical precision (where possible). That's the spirit sjburt was referring to, I guess, and what triggered me, because it is only true to a certain extent.

The cross-checking is in any case good scientific practice, not only because of bugs in the code (that's actually a sub-leading problem imho), but because of the difficulty of the problems and the complexity of their solutions (and their reproducibility). In that sense, cross-checking should uncover both scientific "bugs" and programming bugs. The "debugging" is thus partly also done at the community level - at least in our field of research.

However, it is also a matter of efficiency. I - and many others too - need to re-implement not for bug-hunting/cross-checking but simply because we do not understand the "ugly" code of our colleagues, and instead of taking the risk of breaking existing code we simply write new code, which is extremely inefficient (others may take the risk and then waste months on debugging and reverse-engineering, which is also inefficient). So my point about writing "good code" is not so much about avoiding bugs but about being kind to your colleagues, saving them nerves and time (which they can then spend on actual science) and thus also saving taxpayers' money...

martingab | 5 years ago | on: Challenge to scientists: does your ten-year-old code still run?

That's exactly the philosophy we follow e.g. in particle physics, and it's a common excuse to dismiss all the guidelines given in the article. However, this kind of validation/falsification is usually done between different research groups (maybe using different but formally equivalent approaches), while people within the same group still have to deal with the 10-year-old code base.

I myself had a very bad experience extending the undocumented Fortran 77 code (lots of GOTOs and COMMON blocks) of my supervisor. In the end, I decided to rewrite the whole thing, including my new results, instead of somehow embedding my results into the old code, for two reasons: (1) I'm presumably faster rewriting everything from scratch, including my new research, than struggling with the old code, and (2) I simply would not trust the numerical results/phenomenology produced by the old code. After all, I'm wasting two months of my PhD on the marriage of my own results with known results, which - in principle - could have been done within one day if the code base allowed for it.

So yes, if it's a one-man show I would not put too much weight on code quality (though unit tests and git can save quite a lot of time during development), but if there is a chance that someone else will touch the code in the near future, good code will save your colleagues time and improve the overall (scientific) productivity.

PS: quite excited about my first post here
