On the one hand, this is sorely needed: AI detection software will inevitably be mostly snake oil.
Academia and education desperately want this software to work! As a result, selling them something that doesn't work is going to be very profitable.
The most obvious problem with this class of software is how easy it would be to defeat if the students could access it themselves: generate some text, run it through the detector, then fiddle with it (by manually tweaking it or by prompting the AI to "reword this to be less perfect") until it passes.
Which means these tools need to not be openly available... which makes them much harder to honestly test and evaluate, making it even easier to sell something that doesn't actually work.
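The defeat loop described above is simple enough to sketch. A toy illustration, where `detector_score` and `reword` are hypothetical stand-ins for a commercial detector and an LLM "reword this" prompt:

```python
def evade(text, detector_score, reword, threshold=0.5, max_rounds=10):
    """Keep rewording until the detector stops flagging the text.

    detector_score: hypothetical callable returning P(AI-written).
    reword: hypothetical callable, e.g. an LLM prompted with
    "reword this to be less perfect".
    """
    for _ in range(max_rounds):
        if detector_score(text) < threshold:
            return text  # passes the detector
        text = reword(text)
    return text  # gave up; still flagged
```

Given access to the detector, evasion is just a matter of enough rounds, which is exactly why vendors can't let students run it themselves.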
But... I don't think this site is particularly convincing right now. It has spelling mistakes (which at least help demonstrate AI probably didn't write it) and the key "How AI Detection Software Works" page has a "Coming Soon" notice.
The "examples" page is pretty unconvincing right now too - and that's the page I expect to get the most attention: https://itwasntai.com/examples
It looks to me like this is still very much under development, and is not yet ready for wider distribution.
it's too easy to be negative about things in hype cycles and retroactively look back and go "see! i was right! this was a terrible idea!" but.. this is a terrible idea
to ai detection fans: show us on an information theory basis how you will smuggle in enough bits, avoiding user obfuscation, please. i will change my mind and support you the moment you prove this can be done, otherwise i am default extremely skeptical
But why should educational institutions care? Education is a business, and students are the customers (or, in countries with state-funded education, the government). If AI helps people graduate faster, that's more money for the institutions, less effort for the students, and nice statistics for the governments.
At least in my country most degrees aren't worth much anyway; they just open doors to internships where you really learn stuff. AI isn't going to make the situation any worse.
IMO, the way most schools are going to end up detecting plagiarism is a custom word processor (or something similar) that tracks all of the edits made to a document. Basically, have the students type an essay while all of the keystrokes are recorded, so the program can detect whether someone is copy-pasting whole essays or actually typing and revising the essay until it is submitted. Essays that are simply turned in are probably going to be a thing of the past.
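A crude sketch of what such a tracker could flag. This is purely illustrative; the event format and thresholds are invented, not any real product's logic:

```python
def audit(events, paste_chars=200, min_minutes=10.0):
    """Flag paste-like insertions and implausibly short sessions.

    events: list of (timestamp_seconds, chars_inserted) tuples from a
    hypothetical audit-trail word processor. Typing produces many
    small events, while a wholesale paste lands as one large insertion.
    """
    pastes = [(t, n) for t, n in events if n >= paste_chars]
    minutes = (events[-1][0] - events[0][0]) / 60 if len(events) > 1 else 0.0
    return {
        "paste_events": pastes,
        "session_too_short": minutes < min_minutes,
    }
```

An essay that arrives as one 4,000-character insertion in a zero-minute session looks very different from a thousand small edits spread over half an hour.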
Maybe, but I doubt it. Spyware-based systems are doomed to failure as other commenters note. There's nothing you can do to prove the text came from a human. Faking inputs is extremely easy. People will sell a $20 USB dongle that does appropriate keyboard/mouse things. Worst case, people can simply type in the AI generated essay by hand and/or crib from it directly.
Schools are going to have to look at why take home work is prescribed, and if it should be part of a grading system at all. My hunch is that it probably shouldn't be, and even though it's a big change it's probably something they can navigate.
My wife started teaching a class at the local university. She had a bunch of positives from the anti-plagiarism software used by the university. She ran a bunch of papers by me, and man, analyzing the results is an art in itself. People unconsciously remember and write down phrases and smaller sentences they have read all the time. A little highlight here and there just has to be accepted. Then there are the papers where almost the entire thing is highlighted. It's the ones in between that are tricky as hell. A lot could have gone either way, and it's a judgement call for the teacher whether to send it to the administration for review. I expect AI will just make it more difficult, or handwriting is going to be the new hot subject taught to new levels in elementary...
When I was in college we had a number of group projects, and I thought the whole time that it would make a ton of sense for the professor to set up a class repo (I'm an old person, so it would have been a CVS repo at the time) and be able to see exactly what each person had contributed to the project. Even for single-person projects it would have made it so much easier to detect cheaters. I also think it might light a fire under some of the less shameless slackers.
I hope schools do this now. Not only for detecting cheaters but to get the kids used to working in a more real world environment.
As a parent whose student has worked with multiple essay-entry editors/forms: they're almost all terrible, with most students having to revert to writing the essay outside the system or risk losing their work multiple times. And this was with a simple editor, not more complex connections to even more sophisticated systems.
The budget available for educational technology is not sufficient to maintain the operation of the software, let alone to pay technical staff capable of assessing and selecting reliable systems.
But then you can have ChatGPT write your essay on a phone/tablet and you just slowly re-write it.
I think schools will need to change the way they go about testing student understanding of topics. Personally I'm excited for what this might look like and it is a great opportunity for hackers to really innovate the educational field.
Or they could move to a more British style, with in-person essays, proctored by human observers (not that there aren't old-fashioned ways to cheat on those too, but they're well-known).
If it's so easy to just copy and paste an essay from an AI generator that is of such high quality that it cannot be detected, then why are we still making students learn such an obviously obsolete skill? Why penalize students for using technology?
Surely, there are still things that are difficult to do even with the help of AI. Teach your students to use these tools, and then raise the bar. For example, ask your art students to make complex compositions or animations that can't be handled by Midjourney without significant effort.
I also had a similar idea on how to determine that a piece of writing is genuine. It would be to make students use a word processor that contains a full audit trail of all changes, timestamped. The software would then use a trained AI to look for patterns that deviate from normal composition activities. This could catch a lot of the current fraud. Until someone creates AI bots to get around it...
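One way the "patterns that deviate from normal composition" idea could work is to reduce the timestamped log to features for a trained model to score. A hypothetical sketch, with an invented event format:

```python
def composition_features(events):
    """Summarize a timestamped edit log into features a classifier
    might score: genuine drafting tends to show many small insertions,
    plenty of deletions, and a long session.

    events: list of (timestamp_seconds, kind, n_chars) tuples, where
    kind is "insert" or "delete" (a hypothetical audit-trail format).
    """
    inserted = sum(n for _, kind, n in events if kind == "insert")
    deleted = sum(n for _, kind, n in events if kind == "delete")
    n_inserts = sum(1 for _, kind, _ in events if kind == "insert")
    span = events[-1][0] - events[0][0] if len(events) > 1 else 0.0
    return {
        "chars_per_insert": inserted / max(n_inserts, 1),
        "revision_ratio": deleted / max(inserted, 1),
        "session_seconds": span,
    }
```

A dump of the whole essay in one insertion with no revisions scores very differently from hours of small edits and deletions.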
I don't know if this is the way things should go, but it seems like a decent prediction about how they probably will. In fact, many law school exams are already administered using "blue book" software that functions as rudimentary word processors that lock down the computer's other functions for the duration of the exam. Perhaps other disciplines use this software too.
In the exam context, this software probably already solves the AI problem. Locking down the computer would not, of course, be a solution for other kinds of assignments, but I'll bet it won't be long until schools are using software like you described that just does a lot of snooping instead of locking down the computer.
Unfortunately, the existing software is very clunky and not very reliable. And it doesn't seem like anybody has a strong incentive to improve it. (The schools license the software, and the schools understandably don't care all that much whether the software is nice to use.)
The cat and mouse iteration will be using ChatGPT integrated with Webdriver to slowly type the essay, writing a prompt that says "make occasional mistakes", etc.
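The "slowly type" half of that cat-and-mouse is trivial to script. Leaving the Webdriver part aside, here is a toy sketch of just the pacing logic, with invented parameters:

```python
import random

def humanized_keystrokes(text, wpm=40, typo_rate=0.03, rng=None):
    # Yield (key, delay_seconds) events that retype `text` with
    # human-like pacing and occasional corrected typos.
    # The "\b" key represents a backspace. Purely illustrative.
    rng = rng or random.Random()
    base = 60.0 / (wpm * 5)  # ~5 characters per word
    for ch in text:
        if ch.isalpha() and rng.random() < typo_rate:
            wrong = chr((ord(ch.lower()) - 97 + 1) % 26 + 97)
            yield wrong, base * rng.uniform(0.5, 1.5)  # mistype
            yield "\b", base * rng.uniform(1.0, 3.0)   # notice, delete
        yield ch, base * rng.uniform(0.5, 1.5)

def replay(events):
    # Apply the events to a buffer, as a text field would.
    buf = []
    for key, _delay in events:
        if key == "\b":
            buf.pop()
        else:
            buf.append(key)
    return "".join(buf)
```

Every mistyped character is immediately backspaced, so replaying the stream always reproduces the original text; only the timing and the transient errors look human.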
Wouldn't it still be easier to type out the entire AI generated assignment than to come up with an assignment and then type out the assignment you came up with yourself?
Maybe you connect to the school's chat AI and it probes you for knowledge. The same AI watches you write the essay-type bits and helps you out if you get something wrong. The teacher then gets a report on how well you did and how present you were.
Ironically, the best detector for plagiarism would be a 15 minute conversation asking the student about their research and opinions on the topics written, kind of like interviewing someone who claims redis expertise on their resume.
This is essentially the Oxbridge tutorial system, where you have an hour(ish) meeting once(ish) a week with one or two other students and your tutor and talk over what you've learned, are set an assignment or three, and have to answer any questions about the last week's assignments. A slightly more scalable version is the seminar system, where you have up to 10 students and a tutor and you do roughly the same thing. It only works if participation is mandatory, missing seminars/tutorials is penalised, and tutors are given leeway to adjust grades based on seminar performance or flag students who do really well on their essays but appear incapable of explaining or defending what they've written.
Yes, that is true. ChatGPT refuses to work with a VPN and regularly checks your IP using Cloudflare. Most of these new AI-driven startups are using Cloudflare. It is funny that startups collect/scrape data from all over the web but don't want their own chatbot responses scraped without a paid API. I guess it is life...
I've been reviewing answers to questionnaires we send out to potential software engineering candidates. Sometimes candidates seem to write 90% of the submission themselves, and then use ChatGPT for the last couple of questions (which are more general, like "Outline your thoughts on documentation in software projects"). I joked to a colleague that I'd come up with a fool-proof ChatGPT detector in one line of Python:
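The one-liner itself isn't quoted above. A plausible reconstruction of the joke, assuming it keys on ChatGPT's boilerplate phrasing (the actual line may have differed):

```python
# Hypothetical reconstruction, not the commenter's actual code.
is_chatgpt = lambda text: "as an ai language model" in text.lower()
```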
How many essays are written and graded every year? Even with a 0.0001% FPR, how many students would be facing severe punishments for plagiarism, like auto-failing a course or even expulsion? Ironically enough, using AI to make such decisions seems like one of those unethical use cases.
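The base-rate arithmetic is worth making concrete. The essay count below is a made-up round number for illustration only:

```python
essays_per_year = 50_000_000  # hypothetical figure

# Even a one-in-a-million false positive rate accuses dozens of
# innocent students; the few-percent FPRs typical of real detectors
# would accuse hundreds of thousands.
for fpr in (0.000001, 0.01):
    print(f"FPR {fpr:.4%}: {essays_per_year * fpr:,.0f} false accusations")
```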
This seems kind of pointless to me. Long before I entered the academic system, Turnitin had pioneered the industry of accusing students of plagiarism while simultaneously claiming unlimited license to their works.
They also built a parallel industry selling services to students on how to avoid being flagged for plagiarism.
In the real world, that is known as organized crime. But in academia, it is business as usual.
With the advent of each new technology, entire practices and industries must spring up to counteract the inherent harm that technology will cause and is already causing.
Is it a given that technological progress will often necessitate societal harm? Is such technological progress actually progress for humanity?
There seems to be this universal notion that "things that can be built will be built and are inevitable". It is, for example, argument #1 anytime anyone suggests we should be manufacturing and selling fewer guns: that this is not possible, since guns are "inevitable". You can 3-d print them, after all! Therefore everyone must be armed and we must live in an armed society with regular mass shootings, because what can we do? It's also a ubiquitous slogan used around AI: that AI is "inevitable". It's already out there, Google internal docs are betting that OSS AI will become the norm, and that's that. AI will be everywhere, used for everything, making its fairly unreliable decisions about things like who broke into your house last night, who's likely to be shoplifting, is that a bike in the crosswalk or just nothing at all, etc., and that's now the world we live in.
Are humans as a species perhaps in need of better ways to not build things, since right now every possible thing that is imagined and becomes possible therefore "must" be built, en masse, and humanity's occupation becomes mitigating the species against all the harms brought about by all this "progress"?
anyway that's the low blood sugar version. I'll likely have not much to say after lunch
Why not build nukes en masse? Nukes are inevitable, because human nature has an inherent hunger for power (ask any psychologist nearby), and nukes are the ultimate power.
With nukes at reasonable prices for wealthy families, the notion of an "atomic family" would really get a new, exciting meaning!
I am ofc being sarcastic, but the reasoning is IMO the same as with AI...
The real question here is: why refrain from building nukes, and why refrain from uncontrolled AI development?
Hey, I was just thinking about this idea the other day. One can imagine a world in which all new tech has to go through a period of deliberation before it ever sees the light of day.
There would clearly be a lot of things that would be blocked. Some of them would be good. Even today, we have problems like new drugs being rejected due to risks, when people are dying due to lack of treatment. That kind of thing might get worse.
On the other hand, we might have stopped Thimerosal, leaded gasoline, social media addiction, high fructose corn syrup, CFCs, and perhaps been a lot more careful about fossil fuels before they did so much damage. There are probably more technologies I haven't thought of -- it's easy to forget the ones we don't use anymore.
I don't know if it would be a good thing on average. Delaying technology has costs. BUT, when it comes to technologies that carry existential risk, like fossil fuels and (I believe) AGI, I think it's likely worth it. Gotta play it safe sometimes, so you can keep playing.
It's not just university students anymore. My 11yo kid got accused of being a robot in a physics competition where the only reward is a (paid) summer camp full of extra physics lessons. All that was needed to trigger the accusation was a slightly less fluent explanation of the solution, something you would expect from a student struggling with a difficult task. People are growing unreasonably paranoid.
What is up with these snowflake neues, posting bitter over-the-top political comments?
They stick out like a sore thumb in spring 2023.
It flew under the radar more when everyone was short-tempered, say winter 2021.
The people who are still stuck on it and in their own heads seem to have a __negative__ herding effect. There's a seed of irrationality that drives some, and as there are fewer of them, they stick out more, making it more grating and driving more people away.
This is an "everything sucks all around" situation: because real things are tied to academic performance, you have to weed out dishonesty for fairness, but the power disparity between student and teacher and the black-box nature of the detection make it impossible to actually prove your innocence.
I wish more than anything that the availability of AI will at some point force schools to restructure how classes work to make cheating like this a non-issue. Higher education is unbelievably horrible at actually educating. I only realized that once I graduated and on a whim wanted to learn about something that requires university-level expertise. If you're not there for the credential, it's a monumental waste of time. If classes were designed for students who wanted to be there, and the grades were only for your benefit and not used as a target for anything, you might actually have engaged learners.
I am compelled to point out that in one of the info pages, the site includes screenshots of a conversation with ChatGPT where the author claims to trick AI detection by generating text with a lower temperature. But asking ChatGPT, through the LLM interface, to lower the temperature doesn't lower the temperature. There's no mechanism for it to do so. It may have some (nevertheless real) placebo effect, because the LLM thinks it should behave differently and assigns some vague "meaning" to "temperature" -- but this isn't a technical change to the model's operation.
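For context, temperature is a decoding-time parameter applied by the serving stack, outside the model, which is why no prompt can reach it. A minimal sketch of what it actually does:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Divide logits by the temperature before softmax sampling:
    low values make the choice nearly deterministic, high values
    flatten it toward uniform."""
    t = max(temperature, 1e-6)  # guard against division by zero
    scaled = [l / t for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]
```

At temperature 0.01 the largest logit wins essentially every time; nothing typed into the chat box changes this computation.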
this entire line of reasoning (using AI to detect AI, with disastrous results) is ripe for a giant lawsuit. a sufficiently wealthy school is bound to accuse a sufficiently wealthy student at some point.
oooh I like that, the student can sue for copyright infringement because the teacher uploaded their work and proved that they uploaded it?
sounds like a simple sublicensing clause imposed on the student will fix that, but for the next few semesters a few examples can be made of the teachers and institutions, which will pay off that tuition
That was litigated years ago, for Turnitin.[1] Uploading student papers for plagiarism-checking purposes was considered fair use by a US federal court.

[1] https://macleans.ca/education/uniandcollege/judge-rules-anti-...
What if we just let cheaters cheat? If they don't have the knowledge, they won't last long in a job that requires it. As the saying goes, "You're only cheating yourself."
Teachers need to ask students to write things that are hard for AI to cheat on: if a bunch of humans end up writing very similar essays to the prompt - that's a prompt problem!
I like to think that when I was in college I wrote with enough flair and personality that no one could mistake me for an AI. Perhaps I'm overestimating myself.
If a professor fails you because they thought your final essay was written by an AI and it wasn't, do you have legal grounds to start a lawsuit against the school?
Certainly anyone can start a lawsuit. Maybe there's eventually a chance for some kind of class action prevail, but I don't see how an individual student could make a dent against the resources of a university.
It works…ish. GPT4 is pretty good at detecting what it wrote but I was able to get a false positive with the United States constitution. Or maybe we can go deeper and say maybe it was AI generated?
That's true, but it's also straightforward to train detectors on massive quantities of human and AI-generated text. So AIs can be trained against the latest detectors, but detectors can also be trained against the latest AIs. It will probably come down to which side has more resources to devote to training.
I am concerned about the false positive rate of AI detectors. But if they are only being used to flag submissions for further review, then I can go with it. Finding out whether a student cheated shouldn't be too hard: just quiz them on what they wrote.
But tl;dr: many students have been accused of using AI by teachers who think that AI detection software works, when it really doesn't. So the goal of this site is to communicate to teachers that AI detection software isn't reliable.
I originally discovered this in a reddit comment, which you can see here: https://www.reddit.com/r/ChatGPT/comments/13hi5y6/comment/jk...
visarga|2 years ago
Nowadays if you want to be convincing you got to maek some spelling misakes. Something that looks like predictive keyboard errors, or typing errors.
JohnFen|2 years ago
Probably so. The problem, of course, is that the inability to detect AI authorship leads to the increase of general distrust of everything in society.
I predict more in-person learning interactions.
btilly|2 years ago
https://stealthoptional.com/news/us-constitution-flagged-as-...
As long as that kind of egregious mistake is possible, we should look at such tools with suspicion.
richbell|2 years ago
> What was the name of H.P. Lovecraft's cat?
Balgair|2 years ago
Mostly because it really gets at the root of the issues in education.
Like, fine, you have made some system where cheating is impossible. Great.
But have your students learned anything?
If educators put even an iota of effort into getting to know their students, then they know who is cheating and who isn't.
But if they put that same amount back into teaching, then everyone wins.
Education is not a contest with winner and losers.
(Yes, ok, you went to a bad school where it was a contest for your pre-med degree. Look where that has gotten US healthcare.)
minimaxir|2 years ago
It is a very, very hard problem.
thih9|2 years ago
I’m seeing an error message from cloudflare.
Is the website working for anyone else? Is there an archive / mirror?