35

ChatGPT-generated answers have been found across the Stack Exchange network, and some examples have recently been seen and removed from Ask Ubuntu. Our colleagues on Stack Overflow have put a temporary ban on all such posts, while other sites have begun debating what their positions will be. Stack Exchange itself has decided not to institute a network-wide ban. So now for Ask Ubuntu...

There are many issues with ChatGPT answers, but for me the main one is that they look like they might be good answers (the ones I have seen have been beautifully written), yet because they rely merely on a statistical method they can very, very easily be completely incorrect. The accuracy of such an answer has usually not been tested, and so it has never been properly evaluated before being presented as a definitive answer to a group of users who are often new to Ubuntu and often very trusting of material presented on Ask Ubuntu.

Such answers therefore carry the risk of being harmful to the users of Ask Ubuntu and also harmful to the reputation of Ask Ubuntu as a quality Question and Answer site.

The other major issue is plagiarism. AI posts cannot be effectively referenced, as their data comes from a soup of information that can never be properly attributed. In addition, many of those using ChatGPT to post will attempt to conceal this usage, doubly violating AU plagiarism rules.

My own thought is that such posts have no place on Ask Ubuntu and that, as Stack Overflow, Super User and other sites have done, we should institute a temporary ban on ChatGPT and similar AI technologies: 'temporary' while a clearer picture is sought of the role of such technology across the entire Stack Exchange network.

This ban, IMO, would involve:

  1. Deletion of all ChatGPT content, whether acknowledged as such or not
  2. Moderator email for initial occurrence, if minor in scope
  3. Seven-day suspension for a subsequent occurrence, or for an initial occurrence if larger in scope, and then the usual escalation of penalties for further infractions

I would welcome input from those who believe that AI generated answers have a place on AU as well as those who feel these answers have no place. There is room on AU Meta for varied opinions and vigorous discussion.

Your thoughts?

Further References:

  1. Use of ChatGPT is now banned on Super User: Our colleagues on Super User have now instituted a complete ban on ChatGPT content.
  2. Why posting GPT and ChatGPT generated answers is not currently acceptable: The Stack Overflow rationale for banning ChatGPT answers.
  3. Announcement: AI generated answers are officially banned here: An announcement on English Language and Usage completely banning technology such as ChatGPT from their area.
16
  • So would this ban only be for answers, or would it also cover AI-assisted questions? Most of the evils I have seen so far only apply to answers. If AI is used for questions it would certainly reduce the length of the close vote queue. Commented Dec 19, 2022 at 8:15
  • I am honestly here for discussion rather than to impose my own views. I have personally not seen this technology used for questions; do you have a link?
    – andrew.46 Mod
    Commented Dec 19, 2022 at 8:21
  • openai.com, I think that I will be joining OpenAI in about two minutes. Commented Dec 19, 2022 at 8:59
  • @C.S.Cameron I have been chatting there for the last half hour. Interesting to say the least... chat.openai.com/auth/login
    – andrew.46 Mod
    Commented Dec 19, 2022 at 9:10
  • 2
    I'm having a hard time joining OpenAI, they say that a Sri Lankan phone number looks suspicious. Commented Dec 19, 2022 at 11:53
  • 2
    How are you going to identify such answers? Is it by volume alone, or also if someone steps in and says "This can't be right" - that could be true for human answers as well? Commented Dec 19, 2022 at 12:06
  • 9
    "relying as they do merely on a statistical method they can very, very easily be completely incorrect" — I'd even say, the ChatGPT answers are most often incorrect. I've been trying to use it in various contexts for the past week, including on my job, and I can assure, barring very simple and easily searchable cases, usually they are incorrect. ChatGPT makes up non-existing options to various programs, and even makes up events of these options appearing in a particular release of the program. It stacks up incorrect combinations of parameters. It's a funny toy, but… its usecases are shallow.
    – Hi-Angel
    Commented Dec 19, 2022 at 17:16
  • 5
    @ArturMeinild Identifying the ChatGPT answers is problematic. I have seen 5 on AU; one gave attribution and the other 4 did not. There are, however, patterns in users' posting styles and patterns in the ChatGPT output that are reasonably easy to spot. Best not to give too many details on Meta, I suspect. The problem is that identifying such posts is time-consuming, and this is where SO got into trouble...
    – andrew.46 Mod
    Commented Dec 19, 2022 at 23:21
  • 7
    From meta.stackoverflow.com/questions/421831/… : "The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure." One or two is a misdemeanor by fools, easily handled. Overloading/Swamping is a malicious attack and offenders should be treated like other attackers, bots, spammers and similar scum.
    – user535733
    Commented Dec 20, 2022 at 2:01
  • 2
    Who is going to up-vote an answer that is nonsense? Who is going to continue posting ChatGPT answers if they all get down-voted? Reputation in the future will continue to have the same purpose as reputation today. Commented Dec 20, 2022 at 4:15
  • It takes a bot to know a bot. Commented Dec 20, 2022 at 4:18
  • 13
    @C.S.Cameron "Who is going to up-vote an answer that is nonsense?" <- you must be new here 😊 Commented Dec 20, 2022 at 13:58
  • @Jacob Vlijm 8<) Commented Dec 21, 2022 at 1:15
  • 3
    For fun, I copied this question verbatim and asked the bot about it. This was the response: i.stack.imgur.com/PgGpo.png. Even the bot agrees to ban it xD
    – Dan
    Commented Dec 28, 2022 at 10:27
  • @Dan hilarious. So, AI does make sense after all 😋 Commented Dec 28, 2022 at 15:53

8 Answers

19

100% agree with the ban, especially for Ask Ubuntu.

Here are some points I came up with against allowing ChatGPT answers:

  • ChatGPT is not "sentient"; it was trained on answers from forums like these.
  • It was trained on information from before 2021, which is now outdated.
  • Inaccurate answers will cause more harm, especially on forums like these, as we are dealing with operating systems.
  • It is very overconfident, even when spitting out wrong answers.
  • It won't be able to help with or fix issues arising from a suggested solution.
  • "The model would not ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended" - https://openai.com/blog/chatgpt/

On the other hand.

There are 78,466 questions with no answers.

The community feels like it is getting weaker with every passing day.

People often get discouraged from trying out new distributions or switching to Linux when they witness poor community support. I see new questions every day from new members who leave after getting little to no help.

I would suggest that if a new user posts a relatively "simple/harmless" question that remains unanswered for a week, there should be some automated response referring them to external resources, so that people don't abandon the project altogether; a rough sketch of how such a nudge might find candidate questions is given below.
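
This is only an illustration, not an existing Ask Ubuntu feature: a minimal, read-only sketch of how such an automated nudge might find candidate questions. The Stack Exchange API endpoint and fields used here are real, but the function name and the one-week cut-off are my own assumptions, and any actual automated reply would still need community and moderator approval.

    #!/usr/bin/env python3
    """Sketch: list Ask Ubuntu questions that are at least a week old and
    still have no answers, as candidates for an automated pointer to
    external resources. Read-only; nothing is posted."""
    import time
    import requests

    API = "https://api.stackexchange.com/2.3/questions/no-answers"
    WEEK = 7 * 24 * 60 * 60

    def week_old_unanswered(pagesize=20):
        params = {
            "site": "askubuntu",
            "sort": "creation",
            "order": "asc",
            "todate": int(time.time()) - WEEK,  # created at least a week ago
            "pagesize": pagesize,
        }
        resp = requests.get(API, params=params, timeout=30)
        resp.raise_for_status()
        return resp.json().get("items", [])

    if __name__ == "__main__":
        for q in week_old_unanswered():
            # A real bot would also inspect tags and score to decide whether
            # the question is "simple/harmless" before nudging anyone.
            print(f'{q["score"]:>3}  {q["title"]}  {q["link"]}')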

3
  • 5
    The part that starts with "On the other hand" is not relevant to this thread and is diluting the argument that you make in the first part of your post. I suggest that you open a new question for that concern...
    – Levente
    Commented Dec 20, 2022 at 3:58
  • 2
    I down-voted your answer when you said 100% agree, but I cancelled the down-vote when it became obvious that you only 66% agree. You make some good points. Commented Dec 20, 2022 at 4:42
  • I'm staring at your answer trying to understand how your bullet points do not apply to a typical human-generated answer. No luck as of yet: they do apply to the letter.
    – sigil
    Commented Dec 26, 2022 at 13:15
12

My straw-man position:

  • I am opposed to a flat-out ban on ChatGPT, because I do believe it can be a useful aid for improving answers (see below), but there do need to be rules (and common sense) around its use.

  • ChatGPT should not be used to write answers, but rather to improve answers.

  • A ChatGPT-based answer should never be posted by someone without subject matter expertise in the question being asked in the first place. Confirming that the answer works is not enough. If you are even using ChatGPT to assist you in crafting the answer, then you must have the ability to:

    • Confirm that the answer is correct and works
    • Fully understand the answer that you are posting
    • Correct any issues
    • Communicate the answer in your own words
    • Reply to follow-up questions/comments about your answer
    • Warn of corner cases or potential issues if needed
    • Cite any reliance you made on ChatGPT for the answer

That's in addition to the normal requirements for a good answer. As just one example from that help page, answering a commonly-asked-duplicate is not useful, regardless of whether your answer is AI-assisted or not.

That said, I firmly believe that ChatGPT can be a useful aid when answering questions (and performing other actions on the site):

  • If I'm going to post an answer of my own, I don't mind running the question through ChatGPT to see if there are additional things that I hadn't considered. The suggestions that ChatGPT makes may or may not be useful, but they at least make me think about ways that I might be able to improve my answer.

  • Many users here have the right level of expertise (or beyond), but may not have mastered writing in English. It's far easier for most people to read in a language than it is to write well in that same language. For this reason, ChatGPT can typically offer some great suggestions on improving English grammar for these users. And those users will likely be able to confirm that the meaning of their answer hasn't changed.

  • I have personally used ChatGPT on several occasions here on Ask Ubuntu to improve the grammar of some questions during Edits. The two that I've done:

    • Example 1: Before and after. IIRC, I changed one word from the ChatGPT suggestion -- I felt "the issue persists" was better than "this issue persists".

    • Example 2: Before and after: I definitely made some small tweaks to the ChatGPT suggestion to make it slightly more concise, but I can't recall exactly what they were.

    I do a lot of grammar/clarity edits (including, frequently, on my own mistakes). While I certainly could have cleaned up these two posts on my own, ChatGPT saved a lot of effort in these cases, and I believe the posts (and site) are better for it. In no way did ChatGPT suggest an answer here, or rely on any training other than proper English grammar.


Personal Anecdote

My wife has asked me on several occasions how to solve a Windows problem that she has. A certain application shifts its window entirely off screen when switching from her docked, three-monitor setup to her single-screen laptop.

My answer (from personal, long-past experience) is:

  • Select the application on the Windows task bar.
  • Press Alt+Space to show the "window menu", which appears on-screen.
  • Use the Down Arrow to navigate to the Move option.
  • Press the Enter key to activate it.
  • Hold down the Left Arrow to move it back on the screen.

She's always amazed that I know this off the top of my head. And let's be realistic -- there aren't many people who can give that answer from memory. It's a technique that I developed on my own, before the web and search engines even existed in the form they do today.

So I was curious how (or if) ChatGPT would handle that. I asked it something like "I have an application whose window is appearing off screen in Windows. How do I get it back?"

To my amazement, ChatGPT not only gave me my normal method, but also suggested some other (easier) things that I hadn't considered, like Win (Super)+Left might snap it back to the proper side of the screen.

If I was answering a question like this on a Stack Exchange site, then the extra information that ChatGPT provided would be useful in my answer as long as I could Confirm, Understand, Correct, Communicate, Reply, Warn, and Cite as mentioned above.

16
  • 3
    I think OP is not concerned about the language/readability-enhancement kind of AI usage. I think OP, just like Stack Overflow, is concerned with cases where someone copies an entire question into ChatGPT and posts the output here. That's the part where it can be wildly worthless (while looking absolutely convincing). At the same time, the part with the anecdote about Windows' window position and the new suggestion explored through ChatGPT is very interesting and thought-provoking.
    – Levente
    Commented Dec 19, 2022 at 22:33
  • 2
    @Levente I agree with the intent, but the suggestion from the original question is "Deletion of ChatGPT content, whether acknowledged as such or not". That reads to me like a "total ban", which goes too far, IMHO. That is the position of Stack Overflow at the moment. Any use of ChatGPT is prohibited, which is overboard IMHO. Commented Dec 19, 2022 at 22:36
  • 4
    O.k., I would not necessarily classify your example with the Windows-shortcut either as "posting ChatGPT content". In that example you are not crossposting output from ChatGPT to here, rather, you are augmenting your own knowledge with suggestions from the AI. But you vet and verify that knowledge, perhaps even rephrase it such that it is truly faithful to how you interpret it. The key takeaway is that — at least, for now — we want your ideas, not the machine's.
    – Levente
    Commented Dec 19, 2022 at 22:45
  • 2
    @Levente It's a fine line, but the grammar example I gave was almost verbatim edited by ChatGPT. I would call that "ChatGPT-generated content", even if the input was an answer already. My point is that any ban should examine the corner cases and allow common-sense usage of ChatGPT. It doesn't seem to me that the current policies do so. I'm proposing a policy here that (I hope) attempts to cover the "do's and don'ts" of using ChatGPT on Ask Ubuntu (and other sites). Commented Dec 19, 2022 at 22:50
  • 1
    You and I seem to be in full agreement on the proper usage. It's just that we need any policy to be clear on it as well. Commented Dec 19, 2022 at 22:51
  • 1
    @NotTheDr01ds Your use case sounds wonderful, however you are a dedicated AU user who posts well-researched and carefully thought-out answers. You respond to queries, you encourage new users and ideally you could be cloned! What percentage of users do you think would use ChatGPT in this way, and what percentage would instead unleash a tidal wave of ill-considered, un-researched, crappy AI answers across AU in a frantic game of accumulating rep? I will go with 10% sensible use and 90% flag fodder...
    – andrew.46 Mod
    Commented Dec 19, 2022 at 23:17
  • 3
    @andrew.46 Right (and thanks for the kind words!) - But a "total ban" impacts both experienced users and inexperienced, whereas developing a policy and guidelines for usage allows for sensible usage, and still allows for banning the "flag fodder" for violating the policy. Win-win. Commented Dec 19, 2022 at 23:22
  • 2
    @andrew.46 As an example of a "bad actor", see this meta.superuser.com comment. I fully agree with banning that user, who was indiscriminately generating answers with ChatGPT. But I would like to see the tool used responsibly by users who are capable of doing so, without fear of being banned. Commented Dec 19, 2022 at 23:28
  • @NotTheDr01ds We are indeed living in interesting times :).
    – andrew.46 Mod
    Commented Dec 19, 2022 at 23:32
  • Well said @NotTheDr01ds, but it is obvious that you do not need ChatGPT. Commented Dec 20, 2022 at 4:28
  • 1
    I don't agree that it can't be used to write answers. I would say regardless of whether it creates the answer or "improves" an existing answer, at this time its responses need to be vetted by a human SME. I say this from the perspective that I've given it code problems described in conversational English, and it's produced very good responses. This is from first-pass or dual-pass attempts, not where someone keeps providing instructions over and over to refine an answer. Point is, ChatGPT can be good at helping get through code writer's block, but I would not rely on it for the final answer. Commented Dec 27, 2022 at 15:32
  • 2
    @MrPotatoHead I do agree with that overall, but at this stage it might be safer to leave the wording as "should not be used to write ..." to prevent the misuse we are starting to see. For instance, one user posted 13 answers in the span of two hours yesterday; clearly not vetting them. However if, as we both agree should happen, a SME vets the output and explains that (and likely how) they did so, then I think they (can) ultimately transform it into their answer. Commented Dec 28, 2022 at 14:23
  • Great points @NotTheDr01ds. Agreed. Commented Dec 31, 2022 at 15:10
  • 3
    I really think this is completely beside the point. Of course that kind of use of ChatGPT (which, by the way, is not an AI but just a language model stringing words together with no attempt to actually understand them) is fine. However, the kind of use you describe (using it to enhance already good answers or improve writing) wouldn't even be noticeable. We can safely ignore that since it isn't an issue and cannot really be detected. The point is about people who do not write their answers and instead present the bot's work as their own.
    – terdon
    Commented Jan 2, 2023 at 18:20
  • 1
    If we're going to "ban" ChatGPT here, it needs to be clear where the "line" is in terms of what is acceptable and what is not. Commented Jan 2, 2023 at 21:46
11

The problem seems more about a large volume of garbage answers.

From https://meta.stackoverflow.com/questions/421831/temporary-policy-chatgpt-is-banned :

"The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure."

The current problem is the potential for a blizzard of garbage answers that overload the AskUbuntu cadre of great volunteers. That the answers are written by AI is interesting, but not the main problem. The blizzard is the problem.

The risk to this community is loss of credibility -- that noise drowns out signal and that AskUbuntu's reputation changes from "great gurus" to "garbage farm."

We face a potential flood of malicious vandalism. If that flood materializes, our Moderators will need help from us to spot and flag garbage, and to help identify key garbage posters.

When you identify an AI-written garbage answer:

  • Downvote it.
  • Flag it ('not an answer' for gibberish, or 'in need of moderator intervention' for flat-wrong or damaging content).
  • Click on the user and see if they are posting lots of similar garbage. If so, you just discovered a user that needs immediate action. Jump into chat and let the moderators know about that user.

Our goal should be to get these garbage-posters answer-banned --or perhaps completely user-banned-- and all their answers deleted. Rapidly!

It's up to the volunteer community to speed up the process and help the moderators by identifying obvious perpetrators.

It's up to the moderators to determine if their numbers are adequate to weather the storm (should it arrive), if their workflow to identify and handle garbage-posters rapidly is adequate, how they prefer communication of frequent-garbage-posters discovered by the community, and to communicate how else non-moderators can best help.

2
  • In case you aren't familiar with it, we usually use Raiders of the Lost Downboat (link to Meta post) to assist with content moderation issues. There are several bots running in the room already to identify potentially troublesome content, and a few mods (but not all -- Would love to see more!) that visit on a regular basis. If you spot a potential problem with ChatGPT content (or any other content) feel free to hop in and discuss. Commented Dec 20, 2022 at 15:32
  • @NotTheDr01ds edited! Thanks for clarifying for everybody.
    – user535733
    Commented Dec 20, 2022 at 16:58
2

As is commonly the case, I probably got carried away in my other answer. Let's see if I can propose a simple policy for ChatGPT usage on Ask Ubuntu that could be expressed in the Help:

ChatGPT (and general AI tool) usage policy on Ask Ubuntu

  • Users are not allowed to post answers directly generated by ChatGPT or other similar AI tools. Offenders may be warned, and repeat or frequent postings may result in a ban.

  • If you are able to answer a question on your own, you may use AI tools to assist in wording or to suggest improvements to your answer. If you choose to do so, you should, to the best of your ability:

    • Confirm that the answer is correct and works
    • Understand the answer that you are posting
    • Correct any issues that could reasonably be identified
    • Communicate the answer in your own words
    • Reply to follow-up questions/comments about your answer
    • Warn of corner cases or potential issues if needed
    • Cite any reliance on third-party sources such as ChatGPT for the answer
  • AI tools such as ChatGPT may be used for accessibility purposes such as voice-to-text transcription.

  • AI tools may be used to suggest grammar improvements for your own posts and to make suggested edits to posts with poor grammar. However, as with any edit, take care not to change the meaning of another user's post.

Note: ChatGPT was used to suggest slight improvements in readability and conciseness to this policy. Some suggestions were used, while others were rejected.

8
  • Question is whether it is practically worth it or not. Possibly, the mods will be very busy. It remains to be seen if they will have enough capacity to tiptoe around benevolent usage. Also remember, they are unpaid volunteers with the demanding task of upholding quality. And while readability / impeccable grammar is welcome, it's not the site's top priority. The site's top priority is the accuracy and reliability of the answers. The convenience of producing the posts is also important (UX, in a way?), but only as long as the practice does not interfere with the top priority.
    – Levente
    Commented Dec 20, 2022 at 0:11
  • I guess we could try this out and then see how sustainable it is?
    – Levente
    Commented Dec 20, 2022 at 0:15
  • 1
    @Levente But doesn't a total ban on it have the same impact on Mod (and community) policing? Commented Dec 20, 2022 at 1:21
  • If the policy were a total ban, then they could make an attempt at using automation for it. Automation, if it worked, could be a game changer.
    – Levente
    Commented Dec 20, 2022 at 2:00
  • 1
    "ChatGPT may be used to suggest grammar improvements" please do not encourage people to waste resources. Commented Dec 20, 2022 at 2:36
  • @Levente: "The site's top priority is the accuracy and reliability of the answers?" but I see that "Needs details or clarity" seems to be one of the main reasons for closing a question on the Close Votes queue. How would automation be applied? Commented Dec 20, 2022 at 7:19
  • @UtkarshChandraSrivastava I would say that's one of the more benign and yet beneficial uses. Did you read the two questions from my other answer? I'm not saying that it should be used for minor improvements, but if there's a "wall of text" with limited (or bad) punctuation like in one of the questions I modified, then ChatGPT is a timesaver (again, requiring human verification of accuracy) in improving these posts. Grammar issues may be due to language barriers and accessibility issues. ChatGPT can help users overcome those. Commented Dec 20, 2022 at 7:42
  • 1
    @Levente "If the policy was total ban, then they could make an attempt at using automation for it." I see it the other way around -- The only detectable answers from ChatGPT would be those that were copy/paste/no-verify in the first place. Commented Dec 20, 2022 at 7:45
0

I worked with ChatGPT again yesterday and asked it about Flask and Plotly. The answers look good, but they don't work. It is completely OK to use it privately, but I do not want to investigate "mostly correct but not working" answers in this forum. So I'm opting for a ban until the company has worked out its flaws.

To sum it up: ChatGPT needs to be banned until it works.

-1

NOTE: find updates at the bottom of the post.


Having learned the views of others, I'd like to offer some degree of summary in response, and to deliver some new suggestions.

How to answer to the future, today?

I acknowledge the points of those who claim that AI is the future, and that denying it is futile — foolish even. I also sense the intent of a warning that advises us not to make similar mistakes to those who initially dismissed major, history-shaping technological advancements as unimpactful fads (only to be proven wrong by history shortly afterwards).

Yet I argue that, in itself, this insight is not a sufficient foundation for us to unconditionally embrace "AI-produced content" on our website today. Our main goal here (that is, of those who engage with Ask Ubuntu Meta) is to be the custodians of a collection of reliable answers about Ubuntu.

The current state of available AI technology is not adequate to meet our requirement for consistently factually correct — and therefore safe-to-apply — information.

ChatGPT seems like an early proof of concept whose primary mission was to produce passable contributions to random conversations.

[ Aside: I have seen arguments that it's not even an AI: it's merely a machine learning model oriented at language. ]

Until an AI or equivalent technology is capable of installing Ubuntu in a virtual machine and verifying the effects of its assertions, all it can do is collect information from other, already existing sources — quite possibly including unverified ones — recombine it with an unreliable degree of accuracy, and pass it on repackaged into unique linguistic constructs. This is no way to produce reliable answers to our Ubuntu-related questions.

A further, massive deal-breaker — as some of us have already seen — is when the machine grants itself creative freedoms in recombining and reshuffling software configuration options, regardless of whether these inventions correspond to existing specifications or not.

On these grounds, I see no place for the term "Luddite" in this argument when someone expresses unwillingness to work with information of such low reliability. I believe no one here is against technology. We are against a barrage of useless lies / fiction that the current iteration of ChatGPT is capable of producing under the guise of reliable facts.

Seeing the dialogue in this thread, I am convinced that as soon as an AI is able to produce truly and consistently reliable information about Ubuntu-related challenges, we will re-evaluate our position. (At that point, impactfully, we will also have to face the question of what mission would keep this community (and numerous other online collaborations) working together from then on.)

In the meanwhile, our burden is to keep this site free from misleading, potentially dangerous garbage.

How to maintain a reliable stock of information on AskUbuntu?

We are seeing in this thread that several people:

  • are against a blanket ban, for various motivations
  • are in agreement about the prediction that with the rapid evolution and refinement of the AI / ML technology, enforcing a blanket ban might turn out to be technically challenging

The following suggestions aim to acknowledge these positions & considerations, and suggest measures for a modus operandi where we have to live together with a constant influx of AI-produced content.

I offer the following suggestions; feel free to cherry-pick the most feasible ideas:

Inform proactively: announce a policy

We need to communicate our issues regarding low quality AI posts: this needs to be mentioned in the onboarding process and it needs its dedicated section in the Code Of Conduct.

Additionally, we could consider putting up small, unobtrusive banners where relevant (maybe accompanying the question- and answer submission forms).

Demand disclosure of origin for AI-generated posts

(Updated content)

As I have learned in the meanwhile, it is a general policy of StackExchange to demand disclosure of such material:

If it wasn't created by you, attribution is always required here.

Rely on community-delegated moderation

There is a threat that the volume of low-quality posts could overwhelm our volunteer moderators. To relieve them, we should develop measures that can maintain order without needing explicit moderator attention for each individual case. This could be implemented through flagging, where a sufficient number of flags automatically leads to the removal of a post (comparable to how the "spam" flag works today); a rough sketch of the idea follows.
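
As a purely hypothetical illustration of the idea (Stack Exchange's real flag thresholds and internals are not documented here), threshold-based removal might look roughly like this; every name and number below is invented:

    """Hypothetical sketch of flag-threshold handling, loosely modelled on how
    spam flags already work: enough community flags remove a post without a
    per-case moderator decision. Threshold and names are invented."""
    from dataclasses import dataclass, field

    AUTO_REMOVE_THRESHOLD = 6  # invented value; the real spam threshold differs

    @dataclass
    class Post:
        post_id: int
        ai_flags: set = field(default_factory=set)  # ids of users who flagged
        removed: bool = False

    def flag_as_ai_generated(post: Post, flagging_user_id: int) -> None:
        """Record a community flag; auto-remove once the threshold is reached."""
        if post.removed:
            return
        post.ai_flags.add(flagging_user_id)      # at most one flag per user
        if len(post.ai_flags) >= AUTO_REMOVE_THRESHOLD:
            post.removed = True                  # no individual mod action needed
            notify_moderators(post)              # mods still get a summary

    def notify_moderators(post: Post) -> None:
        print(f"post {post.post_id} auto-removed after "
              f"{len(post.ai_flags)} AI-content flags")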

Establish a new flag dedicated to the issue

Could be content-oriented:

Low effort answer with inadequate content

Or behavior-oriented:

For lack of a better term, until we find something better, I would suggest "meddling".

From dictionary.com:

verb meddling
to involve oneself in a matter without right or invitation; interfere officiously and unwantedly

We already have a behavior-oriented flag for spamming.

Now we could think about experimenting with a new, tailor-made flag for this new challenge:

[ Act of | A product of ] meddling

The poster appears to disseminate cheaply sourced third-party content while failing to demonstrate an ability to verify the adequacy of the information within.

Multiple posts being removed through flagging by the community could invite mod attention, where a temporary suspension could be used to discourage the unwanted activity.

Re-evaluate as necessary

Perhaps we won't get it right on the first try. Let's see how it goes and, if under pressure, update the policies as necessity dictates.

At the end of the day, the ultimate goal is that AskUbuntu remains a reputable source of reliable information. We should maintain only such policies that do not endanger this primary mission.


UPDATE:

I maintain my suggestion to address the problematic usage of ChatGPT by a tailor-made policy and corresponding countermeasure.

This time, I suggest contemplating the usefulness of the terms "impostor" and "charlatan".

From dictionary.com:

noun impostor
a person who practices deception under an assumed character, identity, or name

"British Dictionary definitions for impostor":

a person who deceives others, esp by assuming a false identity; charlatan

From dictionary.com:

noun charlatan
a person who pretends or claims to have more knowledge or skill than he or she possesses; quack.

In my opinion, as long as the poster does not disclose up front that the core "value" of the post was generated by ChatGPT, their behavior amounts to acting as both an impostor and a charlatan.

However, let's recognize that ChatGPT itself is acting as a charlatan. As I interpret it, that's at the core of our ban against it.

In that sense, could a single additional flag aimed at "charlatanism" cover the case?

I admit that I feel both "charlatan" and "impostor" are strong words, and carry heavy emotional weight on top of the intended correct semantic identification of the behaviors in question. Also, introducing them as-is could carry the risk of detrimentally impacting the community culture.

Nevertheless, I feel that the dictionary description of these words match the case we are experiencing.


UPDATE 2:

As I explored earlier, ChatGPT is not (currently) in a position of verifying the adequacy of its assertions through testing the validity of those on a live Ubuntu system (e.g. running one in a virtual machine).

Given this circumstance, attempting to compile valid information — short of quoting snippets from successfully version-matched(!) documentation — is akin to sitting in a casino in front of a slot machine with spinning fruit symbols. Sometimes things end up aligning successfully; oftentimes, however, they do not.

I'd like to point out that the above is not analogous to the contemporary machine-learning practice of trial and error, because trial and error — it seems to me — implicitly involves the verification/evaluation of the results, and the iterative development of the conclusion. That's fundamentally missing from ChatGPT's capabilities.

Therefore we could create a flag along the lines of:

Information produced through means of gambling.

Or, perhaps:

Information produced through means not accepted by this community.

-1

I'm sure it is written somewhere but I cannot find it… I had an answer deleted because I cited my discussion with ChatGPT, which had helped me solve a problem whose answer I could not find here or elsewhere. I wrote my question and pasted the dialog because I thought it would be very useful for others, and it seemed fair to acknowledge the source. I did not know about the ban at the time. So, if this kind of thing happens again, am I allowed to write an answer myself using the knowledge I gained from ChatGPT, knowledge that I empirically checked on my own computer?

Update: After reading the first comment, I'd like to add my grain of salt. I think that if a tool like ChatGPT gives a correct and hard-to-find-elsewhere answer, AND the user verifies the correctness of the answer themselves, AND they rewrite it from scratch with their newly gained knowledge, then they should be allowed to post it. BUT, for fairness, they should be allowed to acknowledge that the original answer came from the AI tool. Maybe a sentence like "derived from a human-checked AI-generated answer" could be a good addition to this kind of answer. If something like that is not allowed, I think there will be a lot of hidden AI-generated content modified just to escape the ban.

3
  • 1
    It would be best in the current climate to avoid the use of ChatGPT entirely on AU and I would include referencing ChatGPT itself in this. The wording of the announcement was intentionally broad to cover all usage: "As of February 2, 2023, there is a permanent and complete ban on the use of ChatGPT and AI generated content on Ask Ubuntu.". For reference: meta.askubuntu.com/q/20209/57576 Of course AU is not the only place struggling with this new technology: futureoflife.org/open-letter/pause-giant-ai-experiments Interesting times indeed :)
    – andrew.46 Mod
    Commented Apr 13, 2023 at 4:29
  • There will always be good intentions for almost anything "not allowed". However, when the bad outweighs the good by a large margin, it's easier to just ban an issue outright until (if ever) a proper plan can be achieved. In the case of ChatGPT, the amount of spam and reputation farming on the SE sites seems to be astronomical. Everyone helping maintain the sites is a volunteer (talking about moderators and users). Having to spend time fact-checking if a ChatGPT answer is correct or not because most users don't isn't something the community is willing to do. Otherwise, overall quality will drop
    – Dan
    Commented Apr 14, 2023 at 12:22
  • I understand, and I probably don't fully realize the impact it has had on this community of maintainers, but I'm absolutely sure that some people will, with good or bad intentions, produce hidden AI-generated content. A way to control it is to allow it with safety barriers. Sorry for this poor metaphor, but cannabis is strictly forbidden in France, and France is the European country with the most users. Where it is tolerated, there are fewer users.
    – Kleag
    Commented Apr 14, 2023 at 16:38
-4

Are we a bunch of Luddites?

Why not just ban spell checkers and any other computer aided technology?

ChatGPT is the future, we should practice it and learn to use it correctly, not hide from it.

Perhaps banning people who misuse it may be okay, but not those who use it for the better and to extend the technology.

People can generate lots of good-looking answers that are not correct, without the need for a bot. I've seen it with my own eyes.

In a year or so there may not be any way to even detect whether it was used.

Story

Back in first year Engineering, ~1974, I had a physics prof who made us use slide rules because she thought calculators were just a fad. At the end of the year we were pretty good with slide rules but most of us never used them again. Meanwhile the guys with more progressive profs were way ahead of us in the use of calculators.

19
  • 1
    I think I agree here - also, how would you ever know if a good and correct answer was run through ChatGPT? Commented Dec 19, 2022 at 12:35
  • 19
    I think you and @ArturMeinild are both missing what's actually happening. The people and posts we're talking about banning aren't those who're using ChatGPT for spell-checking or polishing answers they wrote themselves. They're posting the questions to ChatGPT and copy-pasting the generated answer wholesale here without any attempt whatsoever to verify whether the answer is correct. And while users certainly generate OK-looking BS on their own, ChatGPT enables them to generate this BS fast enough to hit any and all rate limits. And worse, ChatGPT is quite good at generating correct-looking BS.
    – muru
    Commented Dec 19, 2022 at 13:06
  • 1
    I'm certain there has already been one such case on Unix & Linux and possibly one more going on right now.
    – muru
    Commented Dec 19, 2022 at 13:07
  • @muru and how do you verify those answers have been generated with ChatGPT? Commented Dec 19, 2022 at 13:08
  • 1
    @ArturMeinild That is the tricky part, admittedly. You can post the same question and see if it generates similar correct-looking but actually incorrect BS. And the ones that have made me suspicious of being generated so far (across users and sites) have this odd pattern of repeating the same thing in different words multiple times in the same answer.
    – muru
    Commented Dec 19, 2022 at 13:14
  • 8
    However, problems of detection don't and shouldn't prevent us from enacting a policy to ban such zero-effort spam.
    – muru
    Commented Dec 19, 2022 at 13:15
  • I think you need a bot to catch a bot, (as the old saying goes). Commented Dec 19, 2022 at 14:12
  • @muru: But their punishment will be just as severe. Commented Dec 19, 2022 at 14:15
  • @C.S.Cameron whose? As severe as what?
    – muru
    Commented Dec 19, 2022 at 14:29
  • @muru: The punishment of those that use ChatGPT for good and those that use it for evil. As severe as the punishments that andrew.46 refers to. Commented Dec 20, 2022 at 4:48
  • 1
    @C.S.Cameron I would not use the term 'punishment' as such. Suspension is rather a mandatory period of reflection with the period of suspension time proportionate to the severity of the infraction. Mod email is a simple notification that all is not well...
    – andrew.46 Mod
    Commented Dec 20, 2022 at 5:08
  • @andrew.46: My writing is probably enough proof that some of us need something like ChatGPT. I guess I just don't know the difference between punishment and a Mod's simple notification that all is not well, especially if I get a 7 day Suspension for subsequent occurrence. 8<) Commented Dec 20, 2022 at 5:35
  • 4
    @C.S.Cameron Side note: Interestingly enough 'Luddites' was not always a pejorative term. The original Luddites were simply trying to protect their way of life and their livelihoods as the Industrial Revolution swept all before it. The Industrial Revolution was always a double edged sword: so too with ChatGPT which has the potential to destroy some things while at the same time improving other things...
    – andrew.46 Mod
    Commented Dec 27, 2022 at 3:47
  • 1
    @andrew.46: I don't consider "Luddite" a pejorative term in this day and age. Especially when it comes to Nuclear Weapons, Bitcoin, Designer Drugs, Junk Food, Ransomware, Surveillance, Cell Phones etc, etc. I think "Luddite" is only a bad word to those who think caution of the unknown is evil. Commented Dec 27, 2022 at 5:02
  • 1
    @C.S.Cameron I appreciate the indication of your awareness regarding all those issues that you listed, and I agree on that point. However, looking at your answer above, I can reassure you that it does come off with a clearly pejorative overtone. I for one believe that when you wrote that, you were in that mood, and argued accordingly. Of course, with time and feedback, your opinion and argument got refined.
    – Levente
    Commented Dec 27, 2022 at 17:01
