<noinclude>{{Short description|Page for discussing policies and guidelines}}{{Redirect|WP:VPP|proposals|Wikipedia:Village pump (proposals)}}{{village pump page header|Policy|alpha=yes|The '''policy''' section of the ] is intended for discussions about already-proposed ], as well as changes to existing ones. Discussions often begin on other pages and are subsequently moved or referenced here to ensure greater visibility and broader participation.
* If you wish to propose something ''new'' that is ''not'' a policy or guideline, use ]. Alternatively, for drafting with a more focused group, consider starting the discussion on the talk page of a relevant WikiProject, the Manual of Style, or another relevant project page.
* For questions about how to apply existing policies or guidelines, refer to one of the many ].
* If you want to inquire about what the policy is on a specific topic, visit the ] or the ].
* This is '''not the place to resolve disputes''' regarding the implementation of policies. For such cases, consult ].
* For proposals for new or amended speedy deletion criteria, use ].
Please see ''']''' for a list of frequently rejected or ignored proposals. Discussions are automatically archived after two weeks of inactivity.<!--
-->|WP:VPP|WP:VPPOL}}__NEWSECTIONLINK__
{{centralized discussion|compact=yes}}
__TOC__<div id="below_toc"></div>
]
]
]
]
]
]
{{User:MiszaBot/config
|archiveheader = {{Wikipedia:Village pump/Archive header}}
|maxarchivesize = 400K
|counter = 199
|algo = old(10d)
|archive = Wikipedia:Village pump (policy)/Archive %(counter)d
}}</noinclude>
{{clear}}
== RfC: Voluntary RfA after resignation == | |||
{{discussion top|1=There is clear consensus that participants in this discussion wish to retain the "Option 2" status quo. We're past 30 days of discussion and there's not much traffic on the discussion now. It's unlikely the consensus would suddenly shift with additional discussion. --] (]) 18:29, 16 January 2025 (UTC)}} | |||
<!-- ] 22:01, 19 January 2025 (UTC) -->{{User:ClueBot III/DoNotArchiveUntil|1737324070}} | |||
Should ] be amended to: | |||
* '''Option 1'''{{snd}}Require former administrators to request restoration of their tools at the ] (BN) if they are eligible to do so (i.e., they do not fit into any of the exceptions). | |||
* '''Option 2'''{{snd}}<s>Clarify</s> <ins>Maintain the status quo</ins> that former administrators who would be eligible to request restoration via BN may instead request restoration of their tools via a voluntary ] (RfA). | |||
* '''Option 3'''{{snd}}Allow bureaucrats to SNOW-close RfAs as successful if (a) 48 hours have passed, (b) the editor has right of resysop, and (c) a SNOW close is warranted. | |||
'''Background''': This issue arose in one ] and is currently being discussed in an ]. ] (]/]) 21:14, 15 December 2024 (UTC)<br /> | |||
'''Note''': There is an ongoing related discussion at {{slink|Wikipedia:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial}}.<br />
'''Note''': Option 2 was modified around 22:08, 15 December 2024 (UTC). | |||
'''Note''': Added option 3. ] (] • she/her) 22:12, 15 December 2024 (UTC) | |||
:{{block indent|em=1.6|1=<small>Notified: ], ], ], ], ]. ] (]/]) 21:19, 15 December 2024 (UTC)</small>}}<!-- Template:Notified --> | |||
*'''2''' per ]. If an admin wishes to be held accountable for their actions at a re-RfA, they should be allowed to do so. ] ] 21:22, 15 December 2024 (UTC) | |||
*:Also fine with 3 ] ] 22:23, 15 December 2024 (UTC) | |||
* There is ongoing discussion about this at ]. ] (]) 21:24, 15 December 2024 (UTC) | |||
** '''2''', after thought. I don't think 3 provides much benefit, and creating a separate class of RfAs that are speedy passed feels like a misstep. If there are serious issues surrounding wasting time on RfAs set up under what might feel to someone like misleading pretenses, that is best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm ('''RRfA''')". ] (]) 14:49, 16 December 2024 (UTC)
**:{{tq|best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)"}} - I like this idea, if option 2 comes out as consensus I think this small change would be a step in the right direction, as the "this isn't the best use of time" crowd (myself included) would be able to quickly identify the type of RFAs they don't want to participate in. ] ] 11:05, 17 December 2024 (UTC) | |||
**::I think that's a great idea. I would support adding some text encouraging people who are considering seeking reconfirmation to add (RRfA) or (reconfirmation) after their username in the RfA page title. That way people who are averse to reading or participating in reconfirmations can easily avoid them, and no one is confused about what is going on. ] (]) 14:23, 17 December 2024 (UTC) | |||
**::I think this would be a great idea if it differentiated against recall RfAs. ] (]) 18:37, 17 December 2024 (UTC) | |||
**:::If we are differentiating three types of RFA we need three terms. Post-recall RFAs are referred to as "reconfirmation RFAs", "Re-RFAs" or "RRFAs" in multiple places, so ones of the type being discussed here are the ones that should take the new term. "Voluntary reconfirmation RFA" (VRRFA or just VRFA) is the only thing that comes to mind but others will probably have better ideas. ] (]) 21:00, 17 December 2024 (UTC)
* '''1''' ] ] 21:25, 15 December 2024 (UTC) | |||
*'''2''' I don't see why people trying to do the right thing should be discouraged from doing so. If others feel it is a waste of time, they are free to simply not participate. ] ] 21:27, 15 December 2024 (UTC) | |||
*'''2''' Getting reconfirmation from the community should be allowed. Those who see it as a waste of time can ignore those RfAs. ] ] 21:32, 15 December 2024 (UTC) | |||
*Of course they may request at RfA. They shouldn't but they may. This RfA feels like it does nothing to address the criticism actually in play and per the link to the idea lab discussion it's premature to boot. ] (]) 21:38, 15 December 2024 (UTC) | |||
*'''2''' per my comments at the idea lab discussion and Queen of Hearts, Beeblebrox and Scazjmd above. I strongly disagree with Barkeep's comment that "They shouldn't". It shouldn't be made mandatory, but it should be encouraged where the time since desysop and/or the last RFA has been lengthy. ] (]) 21:42, 15 December 2024 (UTC)
*:When to encourage it would be a worthwhile RfC and such a discussion could be had at the idea lab before launching an RfC. Best, ] (]) 21:44, 15 December 2024 (UTC) | |||
*::I've started that discussion as a subsection to the linked VPI discussion. ] (]) 22:20, 15 December 2024 (UTC) | |||
*'''1''' <ins>or '''3'''</ins>. RFA is an "expensive" process in terms of community time. RFAs that qualify should be fast-tracked via the BN process. It is only recently that a trend has emerged that folks that don't need to RFA are RFAing again. 2 in the last 6 months. If this continues to scale up, it is going to take up a lot of community time, and create noise in the various RFA statistics and RFA notification systems (for example, watchlist notices and ]). –] <small>(])</small> 21:44, 15 December 2024 (UTC) | |||
*:Making statistics "noisy" is just a reason to improve the way the statistics are gathered. In this case collecting statistics for reconfirmation RFAs separately from other RFAs would seem to be both very simple and very effective. ''If'' (and it is a very big if) the number of reconfirmation RFAs means that notifications are getting overloaded, ''then'' we can discuss whether reconfirmation RFAs should be notified differently. As far as differentiating them, that is also trivially simple - just add a parameter to ] (perhaps "reconfirmation=y") that outputs something that bots and scripts can check for. ] (]) 22:11, 15 December 2024 (UTC) | |||
*:Option 3 looks like a good compromise. I'd support that too. –] <small>(])</small> 22:15, 15 December 2024 (UTC) | |||
*:I'm weakly opposed to option 3: editors who want feedback and a renewed mandate from the community should be entitled to it. If they felt that a quick endorsement was all that was required, then they could have had that at BN; they explicitly chose not to go that route. Nobody is required to participate in an RFA, so if it is going the way you think it should, or you don't have an opinion, then just don't participate and your time has not been wasted. ] (]) 22:20, 15 December 2024 (UTC)
*'''2'''. We should not make it ''more difficult'' for administrators to be held accountable for their actions in the way they please. ]<sub>]<sub>]</sub></sub> (]/]) 22:00, 15 December 2024 (UTC) | |||
* Added '''option 3''' above. Maybe worth considering as a happy medium, where unsure admins can get a check on their conduct without taking up too much time. ] (] • she/her) 22:11, 15 December 2024 (UTC) | |||
*'''2''' – If a former admin wishes to subject themselves to RfA to be sure they have the requisite community confidence to regain the tools, why should we stop them? Any editor who feels the process is a waste of time is free to ignore any such RfAs. — ] ⚓ ] 22:12, 15 December 2024 (UTC) | |||
*:*I would also support option '''3''' if the time is extended to 72 hours instead of 48. That, however, is a detail that can be worked out after this RfC. — ] ⚓ ] 02:05, 16 December 2024 (UTC) | |||
*'''Option 3''' per leek. ] (]/]) 22:16, 15 December 2024 (UTC) | |||
*:A further note: option 3 gives 'crats the discretion to SNOW close a successful voluntary re-RfA; it doesn't require such a SNOW close, and I trust the 'crats to keep an RfA open if an admin has a good reason for doing so. ] (]/]) 23:24, 16 December 2024 (UTC) | |||
*'''2''' as per {{noping|JJPMaster}}. Regards, --] (]) 22:20, 15 December 2024 (UTC) | |||
*'''Option 2''' (no change) – The sample size is far too small for us to analyze the impact of such a change, but I believe RfA should always be available. Now that ] is policy, returning administrators may worry that they have become out of touch with community norms and may face a recall as soon as they get their tools back at BN. Having this familiar community touchpoint as an option makes a ton of sense, and would be far less disruptive / demoralizing than a potential recall. Taking this route away, even if it remains rarely used, would be detrimental to our desire for increased administrator accountability. – ] 22:22, 15 December 2024 (UTC) | |||
*{{ec}} I'm surprised the response here hasn't been more hostile, given that these give the newly-unresigned administrator a ] for a year. —] 22:25, 15 December 2024 (UTC) | |||
*:@] hostile to what? ] (]) 22:26, 15 December 2024 (UTC) | |||
*'''2, distant second preference 3'''. I would probably support 3 as first pick if not for recall's rule regarding last RfA, but as it stands, SNOW-closing a discussion that makes someone immune to recall for a year is a non-starter. Between 1 and 2, though, the only argument for 1 seems to be that it avoids a waste of time, for which there is the much simpler solution of not participating and instead doing something else. ] and ] are always there. <span style="font-family:courier"> -- ]</span><sup class="nowrap">[]]</sup> <small>(])</small> 23:31, 15 December 2024 (UTC) | |||
* 1 would be my preference, but I don't think we need a specific rule for this. -- ] (]) 23:36, 15 December 2024 (UTC) | |||
*'''Option 1'''. <s>No second preference between 2 or 3.</s> As long as a former administrator didn't resign under a cloud, picking up the tools again should be low friction and low effort for the entire community. If there are issues introduced by the recall process, they should be fixed in the recall policy itself. ] (]) 01:19, 16 December 2024 (UTC) | |||
*:After considering this further, I prefer option 3 over option 2 if option 1 is not the consensus. ] (]) 07:36, 16 December 2024 (UTC) | |||
*'''Option 2''', i.e. leave well enough alone. There is really not a problem here that needs fixing. If someone doesn’t want to “waste their time” participating in an RfA that’s not required by policy, they can always, well, not participate in the RfA. No one is required to participate in someone else’s RfA, and I struggle to see the point of participating but then complaining about “having to” participate. ] (]) 01:24, 16 December 2024 (UTC) | |||
*'''Option 2''' nobody is obligated to participate in a re-confirmation RfA. If you think they are a waste of time, avoid them. ] (]) 01:49, 16 December 2024 (UTC) | |||
* '''1 or 3''' per Novem Linguae. <span style="padding:2px 5px;border-radius:5px;font-family:Arial black;white-space:nowrap;vertical-align:-1px">] <span style=color:red>F</span> ]</span> 02:35, 16 December 2024 (UTC) | |||
*'''Option 3''': Because it is incredibly silly to have situations like we do now of "this guy did something wrong by doing an RfA that policy explicitly allows, oh well, nothing to do but sit on our hands and dissect the process across three venues and counting." Your time is your own. No one is forcibly stealing it from you. At the same time it is equally silly to let the process drag on, for reasons explained in ]. ] (]) 03:42, 16 December 2024 (UTC) | |||
*:Update: Option 2 seems to be the consensus and I also would be fine with that. ] (]) 18:10, 19 December 2024 (UTC) | |||
*'''Option 3''' per Gnoming. I think 2 works, but it is a very long process and for someone to renew their tools, it feels like an unnecessarily long process compared to a normal RfA. ] (]) 04:25, 16 December 2024 (UTC) | |||
*As someone who supported both WormTT and Hog Farm's RfAs, option 1 > option 3 >> option 2. At each individual RfA the question is whether or not a specific editor should be an admin, and in both cases I felt that the answer was clearly "yes". However, I agree that RfA is a very intensive process. It requires a lot of time from the community, as others have argued better than I can. I prefer option 1 to option 3 because the existence of the procedure in option 3 implies that it is a good thing to go through 48 hours of RfA to re-request the mop. But anything which saves community time is a good thing. <b>]]</b> (] • he/they) 04:31, 16 December 2024 (UTC) | |||
*:I've seen this assertion made multiple times now that {{tpq| requires a lot of time from the community}}, yet nowhere has anybody articulated why this is true. What time is required, given that nobody is required to participate and everybody who does choose to participate can spend as much or as little time assessing the candidate as they wish? How and why does a reconfirmation RFA require any more time from editors (individually or collectively) than a request at BN? ] (]) 04:58, 16 December 2024 (UTC)
*::I think there are a number of factors and people are summing it up as "time-wasting" or similar: | |||
*::# BN Is designed for this exact scenario. It's also clearly a less contentious process. | |||
*::# Snow closures are a good example of how we try to avoid wasting community time on unnecessary process, and the same reasoning applies here. Wikipedia is not a bureaucracy and there's no reason to have a 7-day process when the outcome is a given.
*::# If former administrators continue to choose re-RFAs over BN, it could set a problematic precedent where future re-adminship candidates feel pressured to go through an RFA and all that entails. I don't want to discourage people already vetted by the community from rejoining the ranks. | |||
*::# The RFA process is designed to be a thoughtful review of prospective administrators and I'm concerned these kinds of perfunctory RFAs will lead to people taking the process less seriously in the future. | |||
*::] (]) 07:31, 16 December 2024 (UTC) | |||
*::Because several thousand people have RFA on their watchlist, and thousands more will see the "there's an open RFA" notice on theirs whether they follow it or not. Unlike BN, RFA is a process that depends on community input from a large number of people. In order to even ''realise that the RFA is not worth their time'', they have to: | |||
*::* Read the opening statement and first few question answers (I just counted, HF's opening and first 5 answers are about 1000 words) | |||
*::* Think, "oh, they're an ex-admin, I wonder why they're going through RFA, what was their cloud"
*::* Read through the comments and votes to see if any issues have been brought up (another ~1000 words) | |||
*::* None have | |||
*::* Realise your input is not necessary and this could have been done at BN | |||
*::This process will be repeated by hundreds of editors over the course of a week. ] ] 08:07, 16 December 2024 (UTC) | |||
*:::That they were former admins has always been the first two sentences of their RfA’s statement, sentences which are immediately followed by that they resigned due to personal time commitment issues. You do not have to read the first 1000+ words to figure that out. If the reader wants to see if the candidate was lying in their statement, then they just have a quick skim through the oppose section. None of this should take more than 30 seconds in total. ] (]) 13:15, 16 December 2024 (UTC) | |||
*::::Not everyone can skim things easily - it personally takes me a while to read sections. I don't know if they're going to bury the lede and say something like "Also I made 10,000 insane redirects and then decided to take a break just before arbcom launched a case" in paragraph 6. Hog Farm's self nom had two paragraphs about disputes and it takes more than 30 seconds to unpick that and determine if that is a "cloud" or not. Even for reconfirmations, it definitely takes more than 30 seconds to determine a conclusion. ] ] 11:21, 17 December 2024 (UTC) | |||
*:::::They said they resigned due to personal time commitments. That is directly saying they weren’t under a cloud, so I’ll believe them unless someone claims the contrary in the oppose section. If the disputes section contained a cloud, the oppose section would have said so. One chooses to examine such nominations like normal RfAs. ] (]) 18:47, 17 December 2024 (UTC)
*::::::Just to double check, you're saying that whenever you go onto an RFA you expect any reason to oppose to already be listed by someone else, and no thought is required? I am beginning to see how you are able to assess an RFA in under 30 seconds. ] ] 23:08, 17 December 2024 (UTC)
*:::::::Something in their statement would be an incredibly obvious reason. We are talking about the assessment whether to examine and whether the candidate could've used BN. ] (]) 12:52, 18 December 2024 (UTC) | |||
*::@] let's not confuse "a lot of community time is spent" with "waste of time". Some people have characterized the re-RFAs as a waste of time but that's not the assertion I (and I think a majority of the skeptics) have been making. All RfAs use a lot of community time as hundreds of voters evaluate the candidate. They then choose to support, oppose, be neutral, or not vote at all. While editor time is not perfectly fixed - editors may choose to spend less time on non-Wikipedia activities at certain times - neither is it a resource we have in abundance anymore relative to our project. And so I think we, as a community, need to be thoughtful about how we're using that time, especially when that time could otherwise have been spent on other wiki activities. Best, ] (]) 22:49, 16 December 2024 (UTC)
*:::Absolutely nothing compels anybody to spend any time evaluating an RFA. If you think your wiki time is better spent elsewhere than evaluating an RFA candidate, then spend it elsewhere. That way only those who do think it is a good use of their time will participate and everybody wins. You win by not spending your time on something that you don't think is worth it, those who do participate don't have ''their'' time wasted by having to read comments (that contradict explicit policy) about how the RFA is a waste of time. Personally I regard evaluating whether a long-time admin still has the approval of the community to be a very good use of community time, you are free to disagree, but please don't waste my time by forcing me to read comments about how you think I'm wasting my time. ] (]) 23:39, 16 December 2024 (UTC) | |||
*::::I am not saying you or anyone else is wasting time and am surprised you are so fervently insisting I am. Best, ] (]) 03:34, 17 December 2024 (UTC) | |||
*:::::I don't understand how your argument that it is not a good use of community time is any different from arguing that it is a waste of time? ] (]) 09:08, 17 December 2024 (UTC) | |||
*'''Option 2''' I don't mind the re-RFAs, but I'd appreciate if we encouraged restoration via BN instead; I just object to making it mandatory. ] <sup>(]) </sup> 06:23, 16 December 2024 (UTC)
*'''Option 2'''. Banning voluntary re-RfAs would be a step in the wrong direction on admin accountability. Same with SNOW closing. There is no more "wasting of community time" if we let the RfA run for the full seven days, but allowing someone to dig up a scandal on the seventh day is an important part of the RfA process. The only valid criticism I've heard is that folks who do this are arrogant, but banning arrogance, while noble, seems highly impractical. ] ]] 07:24, 16 December 2024 (UTC)
*Option 3, 1, then 2, per HouseBlaster. Also agree with Daniel Quinlan. I think these sorts of RFA's should only be done in exceptional circumstances. ] (]) 08:46, 16 December 2024 (UTC) | |||
* '''Option 1''' as first preference, option 3 second. RFAs use up a lot of time - hundreds of editors will read the RFA and it takes time to come to a conclusion. When that conclusion is "well that was pointless, my input wasn't needed", it is not a good system. I think transparency and accountability is a very good thing, and we need more of it for resysoppings, but that should come from improving the normal process (BN) rather than using a different one (RFA). My ideas for improving the BN route to make it more transparent and better at getting community input are outlined over on the ] ] 08:59, 16 December 2024 (UTC)
* '''Option 2''', though I'd be for '''option 3''' too. I'm all for administrators who feel like they want/should go through an RfA to solicit feedback even if they've been given the tools back already. I see multiple people talk about going through BN, but if I had to hazard a guess, it's way less watched than RfA is. However I do feel like watchlist notifications should say something to the effect of "A request for re-adminship feedback is open for discussion" so that people that don't like these could ignore them. <span>♠] ]</span>♠ 09:13, 16 December 2024 (UTC) | |||
*'''Option 2''' because ] is well-established policy. Read ], which says quite clearly, {{tpq|Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.}} I went back 500 edits to 2017 and the wording was substantially the same back then. So, I simply do not understand why various editors are berating former administrators to the point of accusing them of wasting time and being arrogant for choosing to go through a process which is ''specifically permitted by policy''. It is bewildering to me. ] (]) 09:56, 16 December 2024 (UTC) | |||
*'''Option 2 & 3''' I think that there still should be the choice between BN and re-RFA for resysops, but I think that the re-RFA should stay like it is in Option 3, unless it is controversial, at which point it could be extended to the full RFA period. I feel like this would be the best compromise between not "wasting" community time (which I believe is a very overstated, yet understandable, point) and ensuring that the process is based on broad consensus and that our "representatives" are still supported. If I were WTT or Hog, I might choose to make the same decision so as to be respectful of the possibility of changing consensus. ] (]) | :) | he/him | 10:45, 16 December 2024 (UTC) | |||
*'''Option 2''', for lack of a better choice. Banning re-RFAs is not a great idea, and we should not SNOW close a discussion that would give someone immunity from a certain degree of accountability. I've dropped an idea for an option 4 in the discussion section below. ] (]) 12:08, 16 December 2024 (UTC) | |||
*'''Option 1''' I agree with Graham87 that these sorts of RFAs should only be done in exceptional circumstances, and BN is the best place to ask for tools back. – ] <small>(])</small> 12:11, 16 December 2024 (UTC) | |||
*'''Option 2''' I don't think prohibition makes sense. It also has weird side effects. eg: some admins' voluntary recall policies may now be completely void, because they would be unable to follow them even if they wanted to, because policy prohibits them from doing a RFA. (maybe if they're also 'under a cloud' it'd fit into exemptions, but if an admins' policy is "3 editors on this named list tell me I'm unfit, I resign" then this isn't really a cloud.) {{pb}} Personally, I think Hog Farm's RFA was unwise, as he's textbook uncontroversial. Worm's was a decent RFA; he's also textbook uncontroversial but it happened at a good time. But any editor participating in these discussions to give the "support" does so using their own time. Everyone who feels their time is wasted can choose to ignore the discussion, and instead it'll pass as 10-0-0 instead of 198-2-4. It just doesn't make sense to prohibit someone from seeking a community discussion, though. For almost anything, really. ] (]) 12:33, 16 December 2024 (UTC) | |||
*'''Option 2''' It takes like two seconds to support or ignore an RFA you think is "useless"... can't understand the hullabaloo around them. I stand by what I said on ] regarding RFAs being about evaluating trustworthiness and accountability. Trustworthy people don't skip the process. —] <span title="Canadian!" style="color:red">🍁</span> (] · ]) 15:24, 16 December 2024 (UTC) | |||
*'''Option 1''' - Option 2 is a waste of community time. - ] (]) 15:30, 16 December 2024 (UTC) | |||
*:Why? ] (]) 15:35, 16 December 2024 (UTC) | |||
*'''2''' is fine. '''Strong oppose''' to 1 and 3. Opposing option 1 because there is nothing wrong with asking for extra community feedback. opposing option 3 because once an RfA has been started, it should follow the standard rules. Note that RfAs are extremely rare and non-contentious RfAs require very little community time (unlike this RfC which seems a waste of community time, but there we are). —] (]) 16:59, 16 December 2024 (UTC) | |||
*'''2''', with no opposition to 3. I see nothing wrong with a former administrator getting re-confirmed by the community, and community vetting seems like a good thing overall. If people think it's a waste of time, then just ignore the RfA. ] (]) 17:56, 16 December 2024 (UTC) | |||
*'''2''' Sure, and clarify that should such an RFA be unsuccessful they may only regain the tools through a future RfA. — ] <sup>]</sup> 18:03, 16 December 2024 (UTC)
*'''Option 2''' If contributing to such an RFA is a waste of your time, just don't participate. ] (]) 18:43, 16 December 2024 (UTC) | |||
*:No individual is wasting their time participating. Instead the person asking for a re-rfa is ''using'' tons of editor time by asking hundreds of people to vet them. Even the choice not to participate requires at least some time to figure out that this is not a new RfA; though at least in the two we've had recently it would require only as long as it takes to get to the RfA - for many a click from the watchlist and then another click into the rfa page - and to read the first couple of sentences of the self-nomination which isn't terribly long all things considered. Best, ] (]) 22:55, 16 December 2024 (UTC) | |||
*::I agree with you (I think) that it's a matter of perspective. For me, clicking the RFA link in my watchlist and reading the first paragraph of Hog Farm's nomination (where they explained that they were already a respected admin) took me about 10 seconds. Ten seconds is nothing; in my opinion, this is just a nonissue. But then again, I'm not an admin, checkuser, or an oversighter. Maybe the time to read such a nomination is really wasting their time. I don't know. ] (]) 23:15, 16 December 2024 (UTC) | |||
*:::I'm an admin and an oversighter (but not a checkuser). None of my time was wasted by either WTT or Hog Farm's nominations. ] (]) 23:30, 16 December 2024 (UTC) | |||
*'''2'''. Maintain the ''status quo''. And stop worrying about a trivial non-problem. --] (]) 22:57, 16 December 2024 (UTC) | |||
*'''2'''. This reminds me of banning plastic straws (bear with me). Sure, I suppose in theory, that this is a burden on the community's time (just as straws do end up in landfills/the ocean). However, the amount of community time that is drained is minuscule compared to the amount of community time drained in countless, countless other fora and processes (just like the volume of plastic waste contributed by plastic straws is less than 0.001% of the total plastic waste). When WP becomes an efficient, well oiled machine, then maybe we can talk about saving community time by banning re-RFA's. But this is much ado about nothing, and indeed this plan to save people from themselves, and not allow them to simply decide whether to participate or not, is arguably more damaging than some re-RFAs (just as banning straws convinced some people that "these save-the-planet people are so ridiculous that I'm not going to bother listening to them about anything."). And, in fact, on a separate note, I'd actually love it if more admins just ran a re-RFA whenever they wanted. They would certainly get better feedback than just posting "What do my talk page watchers think?" on their own talk page. Or waiting until they get yelled at on their talk page, AN/ANI, AARV, etc. We say we want admins to respect feedback; does it '''have''' to be in a recall petition? --] (]) 23:44, 16 December 2024 (UTC) | |||
*:What meaningful feedback has Hog Farm gotten? "A minority of people think you choose poorly in choosing this process to regain adminship". What are they supposed to do with that? I share your desire for editors to share meaningful feedback with administrators. My own attempt yielded some, though mainly offwiki where I was told I was both too cautious and too impetuous (and despite the seeming contradiction each was valuable in its own way). So yes let's find ways to get meaningful feedback to admins outside of recall or being dragged to ANI. Unfortunately re-RfA seems to be poorly suited to the task and so we can likely find a better way. Best, ] (]) 03:38, 17 December 2024 (UTC) | |||
*:Let us all take some comfort in the fact that no one has yet criticized this RfC comment as being a straw man argument. --] (]) 23:58, 18 December 2024 (UTC) | |||
*'''No hard rule, but we should socially discourage confirmation RfAs''' There is a difference between a hard rule, and a soft social rule. A hard rule against confirmation RfA's, like option 1, would not do a good job of accounting for edge cases and would thus be ultimately detrimental here. But a soft social rule against them would be beneficial. Unfortunately, that is not one of the options of this RfC. In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers. (Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here.) That takes some introspection and humility to ask yourself: is it worth me inviting two or three hundred people to spend part of their lives to comment on me as a person?{{pb}}A lot of people have thrown around ] in their reasonings. Obviously, broad generalizations about it aren't convincing anyone. So let me just share my own experience. I saw the watchlist notice open that a new RfA was being run. I reacted with some excitement, because I always like seeing new admins. When I got to the page and saw Hogfarm's name, I immediately thought "isn't he already an admin?" I then assumed, ah, it's just the classic RfA reaction at seeing a qualified candidate, so I'll probably support him since I already think he's an admin. But then as I started to do my due diligence and read, I saw that he really, truly, already had been an admin. At that point, my previous excitement turned to a certain unease. I had voted yes for Worm's confirmation RfA, but here was another...and I realized that my blind support for Worm might have been the start of an entirely new process. I then thought "bet there's an RfC going about this," and came here.{{pb}}I then spent a while polishing up my essay on editor time, before taking time to write this message. All in all, I probably spent a good hour doing this. Previously, I'd just been clicking the random article button and gnoming. So, the long-winded moral: yeah, this did eat up a lot of my editor time that could have and was being spent doing something else. And I'd do it again! It was important to do my research and to comment here. But in the future...maybe I won't react quite as excitedly to seeing that RfA notice. Maybe I'll feel a little pang of dread...wondering if it's going to be a confirmation RfA. We can't pretend that confirmation RfA's are costless, and that we don't lose anything even if editors just ignore them. When run, it should be because they are necessary. ] <sup>]</sup>] 03:29, 17 December 2024 (UTC)
*:And for what its worth, support '''Option 3''' because I'm generally a fan of putting more tools in people's toolboxes. ] <sup>]</sup>] 03:36, 17 December 2024 (UTC) | |||
*:{{tpq|In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers.}} Asking the community whether you still have their trust to be an administrator, which is what a reconfirmation RFA is, ''is'' a good reason. I expect getting a near-unanimous "yes" is good for one's ego, but that's just a (nice) side-effect of the far more important benefits to the entire community: a trusted administrator. | 
*:The time you claim is being eaten up unnecessarily by reconfirmation RFAs was actually taken up by you choosing to spend your time writing an essay about using time for things you don't approve of and then hunting out an RFC in which you wrote another short essay about using time on things you don't approve of. Absolutely none of that is a necessary consequence of reconfirmation RFAs - indeed the response consistent with your stated goals would have been to read the first two sentences of Hog Farm's RFA and then closed the tab and returned to whatever else it was you were doing. ] (]) 09:16, 17 December 2024 (UTC) | |||
*:WTT's and Hog Farm's RFAs would have been completely uncontentious, something I hope for at RfA and certainly the opposite of what I "dread" at RfA, if it were not for the people who attack the very concept of standing for RfA again despite policy being crystal clear that it is absolutely fine. I don't see how any blame for this situation can be put on WTT or HF. We can't pretend that dismissing uncontentious reconfirmation RfAs is costless; discouraging them removes one of the few remaining potentially wholesome bits about the process. —] (]) 09:53, 17 December 2024 (UTC) | |||
*:@] Would you find it better if Watchlist notices and similar said "(re?)confirmation RFA" instead of "RFA"? Say for all voluntary RFAs from an existing admin or someone who could have used BN? | |||
*:As a different point, I would be quite against any social discouraging if we're not making a hard rule as such. Social discouraging is what got us the opposes at WTT/Hog Farm's RFAs, which I found quite distasteful and badgering. If people disagree with a process, they should change it. But if the process remains the same, I think it's important to not enable RFA's toxicity by encouraging others to namecall or re-argue the process in each RRFA. It's a short road from social discouragement to toxicity, unfortunately. ] (]) 18:41, 19 December 2024 (UTC) | |||
*::Yes I think the watchlist notice should specify what kind of RfA, especially with the introduction of recall. ] <sup>]</sup>] 16:49, 23 December 2024 (UTC) | |||
* '''Option 1'''. Will prevent the unnecessary drama trend we have been seeing recently. – ] (]) 07:18, 17 December 2024 (UTC) | 
* '''Option 2''' if people think there's a waste of community time, don't spend your time voting or discussing. Or add "reconfirmation" or similar to the watchlist notice. ] (]) 15:08, 17 December 2024 (UTC) | |||
* '''Option 3''' (which I think is a subset of option 2, so I'm okay with the status quo, but I want to endorse giving 'crats the option to SNOW). While they do come under scrutiny from time to time for the extensive discussions in the "maybe" zone following RfAs, this should be taken as an indication that they are unlikely to do something like close it as SNOW in the event there are <em>real and substantial</em> concerns being raised. This is an okay tool to give the 'crats. As far as I can tell, no one has ever accused them of moving too quickly in this direction (not criticism; love you all, keep up the good work). ] (]) 17:26, 17 December 2024 (UTC) | 
* '''Option 3 or Option 2'''. Further, if Option 2 passes, I expect it also ends all the bickering about lost community time. A consensus explicitly in favour of "This is allowed" should also be a consensus to discourage relitigation of this RFC. ] (]) 17:35, 17 December 2024 (UTC) | |||
*'''Option 2''': Admins who do not exude entitlement are to be praised. Those who criticize this humility should have a look in the mirror before accusing those who ask for reanointment from the community of "arrogance". I agree that it wouldn't be a bad idea to mention in parentheses that the RFA is a reconfirmation (watchlist) and wouldn't see any problem with crats snow-closing after, say, 96 hours. -- ] <sup>] · ]</sup> 18:48, 17 December 2024 (UTC) | |||
*:I disagree that BN shouldn't be the normal route. RfA is already as hard and soul-crushing as it is. ] (]) 20:45, 17 December 2024 (UTC) | |||
*::Who are you disagreeing with? This RfC is about voluntary RRfA. -- ] <sup>] · ]</sup> 20:59, 17 December 2024 (UTC) | |||
*:::I know. I see a sizable amount of commenters here starting to say that voluntary re-RfAs should be encouraged, and your first sentence can be easily read as implying that admins who use the BN route exude entitlement. I disagree with that (see my reply to Thryduulf below). ] (]) 12:56, 18 December 2024 (UTC) | |||
*::One way to improve the reputation of RFA is for there to be more RFAs that are not terrible, such as reconfirmations of admins who are doing/have done a good job who sail through with many positive comments. There is no proposal to make RFA mandatory in circumstances it currently isn't, only to reaffirm that those who voluntarily choose RFA are entitled to do so. ] (]) 21:06, 17 December 2024 (UTC) | |||
*:::I know it's not a proposal, but there's enough people talking about this so far that it could become a proposal.<br />There's nearly nothing in between that could've lost the trust of the community. I'm sure there are many who do not want to be pressured into ] without good reason. ] (]) 12:57, 18 December 2024 (UTC) | |||
*::::Absolutely nobody is proposing, suggesting or hinting here that reconfirmation RFAs should become mandatory - other than comments from a few people who oppose the idea of people voluntarily choosing to do something policy explicitly allows them to choose to do. The best way to avoid people being pressured into being accused of arrogance for seeking reconfirmation of their status from the community is to sanction those people who accuse people of arrogance in such circumstances as such comments are in flagrant breach of AGF and NPA. ] (]) 14:56, 18 December 2024 (UTC) | |||
*:::::Yes, I’m saying that they should not become preferred. There should be no social pressure to do RfA instead of BN, only pressure intrinsic to the candidate. ] (]) 15:37, 18 December 2024 (UTC) | |||
*::::::Whether they should become preferred in any situation forms no part of this proposal in any way shape or form - this seeks only to reaffirm that they are permitted. A separate suggestion, completely independent of this one, is to encourage (explicitly not mandate) them in some (but explicitly not all) situations. All discussions on this topic would benefit if people stopped misrepresenting the policies and proposals - especially when the falsehoods have been explicitly called out. ] (]) 15:49, 18 December 2024 (UTC) | |||
*:::::::I am talking and worrying over that separate proposal many here are suggesting. I don’t intend to oppose Option 2, and sorry if I came off that way. ] (]) 16:29, 18 December 2024 (UTC) | |||
*'''Option 2'''. In fact, I'm inclined to ''encourage'' an RRfA over BN, because nothing requires editors to participate in an RRfA, but the resulting discussion is better for reaffirming community consensus for the former admin or otherwise providing helpful feedback. --] (]) 21:45, 17 December 2024 (UTC) | |||
*'''Option 2''' ] has said "{{tq|Former administrators may seek reinstatement of their privileges through RfA...}}" for over ten years and this is not a problem. I liked the opportunity to be consulted in the current RfA and don't consider this a waste of time. ]🐉(]) 22:14, 17 December 2024 (UTC) | |||
*'''Option 2'''. People who think it’s not a good use of their time always have the option to scroll past. ] (]) 01:41, 18 December 2024 (UTC) | |||
* '''2''' - If an administrator gives up sysop access because they plan to be inactive for a while and want to minimize the attack surface of Misplaced Pages, they should be able to ask for permissions back the quickest way possible. If an administrator resigns because they do not intend to do the job anymore, and later changes their mind, they should request a community discussion. The right course of action depends on the situation. ] <sup>]</sup> 14:00, 18 December 2024 (UTC) | |||
*'''Option 1'''. I've watched a lot of RFAs and re-RFAs over the years. There's a darn good reason why the community developed the "go to BN" option: saves time, is straightforward, and if there are issues that point to a re-RFA, they're quickly surfaced. People who refuse to take the community-developed process of going to BN first are basically telling the community that they need the community's full attention on their quest to re-admin. Yes, there are those who may be directed to re-RFA by the bureaucrats, in which case, they have followed the community's carefully crafted process, and their re-RFA should be evaluated from that perspective. ] (]) 02:34, 19 December 2024 (UTC) | |||
*'''Option 2'''. If people want to choose to go through an RFA, who are we to stop them? ] (]) 10:25, 19 December 2024 (UTC) | |||
*'''Option 2''' (status quo/no changes) per ]. This is bureaucratic rulemongering at its finest. Every time RFA reform comes up some editors want admins to be required to periodically reconfirm, then when some admins decide to reconfirm voluntarily, suddenly that's seen as a bad thing. The correct thing to do here is nothing. If you don't like voluntary reconfirmation RFAs, you are not required to participate in them. ] (<sup>]</sup>/<sub>]</sub>) 19:34, 19 December 2024 (UTC) | |||
*'''Option 2''' I would probably counsel just going to BN most of the time; however, there are exceptions and edge cases. To this point these RfAs have been few in number, so the costs incurred are relatively minor. If the number becomes large then it might be worth revisiting, but I don't see that as likely. Some people will probably impose social costs on those who start them by opposing these RfAs, with the usual result, but that doesn't really change the overall analysis. Perhaps it would be better if our idiosyncratic internal logic didn't produce such outcomes, but that's a separate issue and frankly not really worth fighting over either. There are probably some meta issues here I'm unaware of; it's been a long time since I've had my finger on the community pulse, so to speak, but they tend to matter far less than people think they do. ] (]) 02:28, 20 December 2024 (UTC) | 
* '''Option 1''', per ], ], ], ], and related principles. We all have far better things to do that read through and argue in/about a totally unnecessary RfA invoked as a "Show me some love!" abuse of process and waste of community time and productivity. I could live with option 3, if option 1 doesn't fly (i.e. shut these silly things down as quickly as possible). But option 2 is just out of the question. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 04:28, 22 December 2024 (UTC) | |||
*:Except none of the re-RFAs complained about have been {{tpq|RfA invoked as a "Show me some love!" abuse of process}}, you're arguing against a strawman. ] (]) 11:41, 22 December 2024 (UTC) | |||
*::It's entirely a matter of opinion and perception, or A) this RfC wouldn't exist, and B) various of your fellow admins like TonyBallioni would not have come to the same conclusion I have. Whether the underlying intent (which no one can determine, lacking as we do any magical mind-reading powers) is solely egotistical is ultimately irrelevant. The {{em|actual effect}} (what matters) of doing this whether for attention, or because you've somehow confused yourself into think it needs to be done, is precisely the same: a showy waste of community volunteers' time with no result other than a bunch of attention being drawn to a particular editor and their deeds, without any actual need for the community to engage in a lengthy formal process to re-examine them. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 05:49, 23 December 2024 (UTC) | |||
*:::{{tqb|or because you've somehow confused yourself into think it needs to be done}} I and many others here agree and stand behind the very reasoning that has "confused" such candidates, at least for WTT. ] (]) 15:37, 23 December 2024 (UTC) | |||
*'''Option 2'''. I see no legitimate reason why we should be changing the status quo. Sure, some former admins might find it easier to go through BN, and it might save community time, and most former admins ''already'' choose the easier option. However, if a candidate last ran for adminship several years ago, or if issues were raised during their tenure as admin, then it may be helpful for them to ask for community feedback, anyway. There is no "wasted" community time in such a case. I really don't get the claims that this violates ], because it really doesn't apply when a former admin last ran for adminship 10 or 20 years ago or wants to know if they still have community trust.{{pb}}On the other hand, if an editor thinks a re-RFA is a waste of community time, they can simply choose not to participate in that RFA. Opposing individual candidates' re-RFAs based solely on opposition to re-RFAs in general ''is'' a violation of ]. – ] (]) 14:46, 22 December 2024 (UTC) | |||
*:But this isn't the status quo? We've never done a re-RfA before now. The question is whether this previously unconsidered process, which appeared as an ], is a feature or a bug. ] <sup>]</sup>] 23:01, 22 December 2024 (UTC) | |||
*::There have been lots of re-RFAs, historically. They were more common in the 2000s. ] in 2003 is the earliest I can find, back before the re-sysopping system had been worked out fully. ] back in 2007 was snow-closed after one day, because the nominator and applicant didn't know that they could have gone to the bureaucrats' noticeboard. For more modern examples, ] (2011) is relatively similar to the recent re-RFAs in the sense that the admin resigned uncontroversially but chose to re-RFA before getting the tools back. Immediately following and inspired by HJ Mitchell's, there was the slightly more controversial ]. That ended successful re-RFAs until 2019's ], which crat-chatted. Since then, there have been none that I remember. There have been several re-RFAs from admins who were de-sysopped or at serious risk of de-sysopping, and a few interesting edge cases such as the yet no-consensus ] in 2014 and the ] case in 2015, but those are very different than what we're talking about today. ] (]) 00:01, 23 December 2024 (UTC) | 
*:::To add on to that, ] was technically a reconfirmation RFA, which in a sense can be treated as a re-RFA. My point is, there is some precedent for re-RFAs, but the current guidelines are ambiguous as to when re-RFAs are or aren't allowed. – ] (]) 16:34, 23 December 2024 (UTC) | |||
*::::Well thank you both, I've learned something new today. It turns out I was working on a false assumption. It has just been so long since a re-RfA that I assumed it was a truly new phenomenon, especially since there were two in short succession. I still can't say I'm thrilled by the process and think it should be used sparingly, but perhaps I was a bit over concerned. ] <sup>]</sup>] 16:47, 23 December 2024 (UTC) | |||
*'''Option 2 or 3''' per Gnoming and CaptainEek. Such RfAs only require at most 30 seconds for one to decide whether or not to spend their time on examination. Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Voluntary reconfirmation RfAs are socially discouraged, so there is usually a very good reason for someone to go back there, such as accountability for past statements in the case of WTT or large disputes during adminship in the case of Hog Farm. I don't think we should outright deny these, and there is no disruption incurred if we don't. ] (]) 15:44, 23 December 2024 (UTC) | |||
*'''Option 2''' but for largely the reasons presented by CaptainEek. ''']''' (<small>aka</small> ] '''·''' ] '''·''' ]) 21:58, 23 December 2024 (UTC) | |||
*'''Option 2 (fine with better labeling)''' These don't seem harmful to me and, if I don't have time, I'll skip one and trust the judgment of my fellow editors. No objection to better labeling them though, as discussed above. ] (]) 22:36, 23 December 2024 (UTC) | |||
*'''Option 1''' because it's just a waste of time to go through and !vote on candidates who just want the mop restored when he or she or they could get it restored at BN with no problems. But I can also see option 2 being good for a former mod not in good standing. ] (]) 23:05, 23 December 2024 (UTC) | 
*:If you think it is a waste of time to !vote on a candidate, just don't vote on that candidate and none of your time has been wasted. ] (]) 23:28, 23 December 2024 (UTC) | |||
*'''Option 2''' per QoH (or me? who knows...) ] • ] • ] 04:24, 27 December 2024 (UTC) | |||
*'''Option 2''' Just because someone may be entitled to get the bit back doesn't mean they necessarily should. Look at ]. I did not resign under a cloud, so I could have gotten the bit back by request. However, the RFA established that I did not have the community support at that point, so it was a good thing that I chose that path. I don't particularly support option 3, but I could deal with it. --] 16:05, 27 December 2024 (UTC) | |||
*'''Option 1''' Asking hundreds of people to vet a candidate who has already passed a RfA and is eligible to get the tools back at BN is a waste of the community's time. -- ] (]) 16:21, 27 December 2024 (UTC) | |||
*'''Option 2''' Abolishing RFA in favour of BN may need to be considered, but I am unconvinced by arguments about RFA being a waste of time. ] ] 19:21, 27 December 2024 (UTC) | |||
*'''Option 2''' I really don't think there's a problem that needs to be fixed here. I am grateful at least a couple administrators have asked for the support of the community recently. ] ''<span style="font-size:small; vertical-align:top;">]</span>''·''<span style="font-size:small; vertical-align:bottom;">]</span>'' 00:12, 29 December 2024 (UTC) | |||
*'''Option 2'''. Keep the status quo of {{tq|any editor is free to re-request the tools through the requests for adminship process}}. Voluntary RfAs are rare enough not to be a problem; it's not as though we are overburdened with RfAs. And it’s my time to waste. --] (]) 17:58, 7 January 2025 (UTC) | 
* '''Option 2 or Option 3'''. These are unlikely to happen anyway, it's not like they're going to become a trend. I'm already wasting my time here instead of other more important activities anyway, so what's a little more time spent giving an easy support?{{pb}}<span style="border-radius:9em;padding:0 7px;background:#000000">] ]</span> 16:39, 10 January 2025 (UTC) | 
*'''Option 1''' Agree with Daniel Quinlan that for the problematic editors eligible for re-sysop at BN despite unpopularity, we should rely on our new process of admin recall, rather than pre-emptive RRFAs. I'll add the novel argument that when goliaths like Hog Farm unnecessarily showcase their achievements at RFA, it scares off nonetheless qualified candidates. ] ( ] ) 17:39, 14 January 2025 (UTC) | |||
:'''Option 2''' per Gnoming/CaptainEek ] (]) 20:04, 14 January 2025 (UTC) | 
*'''Option 2''' or '''Option 3''' - if you regard a re-RfA as a waste of your time, just don't waste it by participating; it's not mandatory. ]<sup>]</sup> 12:13, 15 January 2025 (UTC) | |||
===Discussion=== | |||
*{{re|Voorts}} If option 2 gets consensus how would this RfC change the wording {{tqq|Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.}} Or is this an attempt to see if that option no longer has consensus? If so why wasn't alternative wording proposed? As I noted above this feels premature in multiple ways. Best, ] (]) 21:43, 15 December 2024 (UTC) | |||
*:That is not actually true. ArbCom can (and has) forbidden some editors from re-requesting the tools through RFA. ] ] 19:21, 27 December 2024 (UTC) | |||
*I've re-opened this per ] on my talk page. If other editors think this is premature, they can !vote accordingly and an uninvolved closer can determine if there's consensus for an early close in deference to the VPI discussion. ] (]/]) 21:53, 15 December 2024 (UTC) | |||
*:The discussion at VPI, which I have replied on, seems to me to be different enough from this discussion that both can run concurrently. That is, however, my opinion as a mere editor. — ] ⚓ ] 22:01, 15 December 2024 (UTC) | |||
*:@], can you please reword the RfC to make it clear that Option 2 is the current consensus version? It does not need to be clarified – it already says precisely what you propose. – ] 22:02, 15 December 2024 (UTC) | |||
*::{{done}} ] (]/]) 22:07, 15 December 2024 (UTC) | |||
*'''Question''': May someone clarify why many view such confirmation RfAs as a waste of community time? No editor is obligated to take up their time and participate. If there's nothing to discuss, then there's no friction or dis-cussing, and the RfA smooth-sails; if a problem is identified, then there was a good reason to go to RfA. I'm sure I'm missing something here. ] (]) 22:35, 15 December 2024 (UTC) | |||
*: The intent of RfA is to provide a comprehensive review of a candidate for adminship, to make sure that they meet the community's standards. Is that happening with vanity re-RfAs? Absolutely not, because these people don't need that level of vetting. I wouldn't consider a week long, publicly advertized back patting to be a productive use of volunteer time. -- ] (]) 23:33, 15 December 2024 (UTC) | |||
*::But no volunteer is obligated to pat such candidates on the back. ] (]) 00:33, 16 December 2024 (UTC) | |||
*::: Sure, but that logic could be used to justify any time sink. We're all volunteers and nobody is forced to do anything here, but that doesn't mean that we should promote (or stay silent with our criticism of, I suppose) things that we feel don't serve a useful purpose. I don't think this is a huge deal myself, but we've got two in a short period of time and I'd prefer to do a bit of push back now before they get more common. -- ] (]) 01:52, 16 December 2024 (UTC) | |||
*::::Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. ] (]) 02:31, 16 December 2024 (UTC) | |||
*::::::Except someone who has no need for advanced tools and is not going to use them in any useful fashion would then skate through with nary a word said about their unsuitability, regardless of the foregone conclusion. The point of RFA is not to rubber-stamp. Unless there is some actual issue or genuine concern they might not get their tools back, they should just re-request them at BN and stop wasting people's time with pointless non-process wonkery. ] (]) 09:05, 16 December 2024 (UTC) | 
*:::::::I’m confused. Adminship requires continued use of the tools. If you think they’re suitable for BN, I don’t see how doing an RfA suddenly makes them unsuitable. If you have concerns, raise them. ] (]) 13:02, 16 December 2024 (UTC) | 
*I don't think the suggested problem (which I acknowledge not everyone thinks is a problem) is resolved by these options. Admins can still run a re-confirmation RfA after regaining administrative privileges, or even initiate a recall petition. I think as ], we want to encourage former admins who are unsure if they continue to be trusted by the community at a sufficient level to explore lower-cost ways of determining this. ] (]) 00:32, 16 December 2024 (UTC) | 
*:Regarding option 3, ]. The intent of having a reconfirmation request for administrative privileges is counteracted by closing it swiftly. It provides incentive for rapid voting that may not provide the desired considered feedback. ] (]) 17:44, 17 December 2024 (UTC) | |||
* In re the idea that RfAs use up a lot of community time: I first started editing Misplaced Pages in 2014. There were 62 RfAs that year, which was a historic low. Even counting all of the AElect candidates as separate RfAs, including those withdrawn before voting began, we're still up to only 53 in 2024 – counting only traditional RfAs it's only 18, which is the second lowest number ever. By my count we've had 8 resysop requests at BN in 2024; even if all of those went to RfA, I don't see how that would overwhelm the community. That would still leave us on 26 traditional RfAs per year, or (assuming all of them run the full week) one every other week. ] (]) 10:26, 16 December 2024 (UTC) | 
* What about an option 4 encouraging eligible candidates to go through BN? At the end of the ], add something like "Eligible users are encouraged to use this method rather than running a new request for adminship." The current wording makes re-RfAing sound like a plausible alternative to a BN request, when in actual fact the former rarely happens and always generates criticism. ] (]) 12:08, 16 December 2024 (UTC) | |||
*:Discouraging RFAs is the second last thing we should be doing (after prohibiting them), rather per my comments here and in the VPI discussion we should be ''encouraging'' former administrators to demonstrate that they still have the approval of the community. ] (]) 12:16, 16 December 2024 (UTC) | |||
*:I think this is a good idea if people do decide to go with option 2, if only to stave off any further mixed messages that people are doing something wrong or rude or time-wasting or whatever by doing a second RfA, when it's explicitly mentioned as a valid thing for them to do. ] (]) 15:04, 16 December 2024 (UTC) | |||
*::If RFA is explicitly a valid thing for people to do (which it is, and is being reaffirmed by the growing consensus for option 2) then we don't need to (and shouldn't) discourage people from using that option. The mixed messages can be staved off by people simply not making comments that explicitly contradict policy. ] (]) 15:30, 16 December 2024 (UTC) | |||
*:::Also a solid option, the question is whether people will actually do it. ] (]) 22:55, 16 December 2024 (UTC) | |||
*::::The simplest way would be to just quickly hat/remove all such comments. Pretty soon people will stop making them. ] (]) 23:20, 16 December 2024 (UTC) | |||
* This is not new. We've had sporadic "vanity" RfAs since the early days of the process. I don't believe they're particularly harmful, and think that it unlikely that we will begin to see so many of them that they pose a problem. As such I don't think this policy proposal ]. ''']]''' 21:56, 16 December 2024 (UTC) | |||
* This apparent negative feeling evoked at an RFA for a former sysop ''everyone agrees is fully qualified and trusted'' certainly will put a bad taste in the mouths of other former admins who might consider a reconfirmation RFA ''without first'' visiting BN. This comes in the wake of Worm That Turned's similar rerun. ] (]) 23:29, 16 December 2024 (UTC) | |||
*:Nobody should ever be discouraged from seeking community consensus for significant changes. Adminship is a significant change. ] (]) 23:32, 16 December 2024 (UTC) | |||
*::No argument from me. I was a big Hog Farm backer way back when he was ''merely'' one of Misplaced Pages's best content contributors. ] (]) 12:10, 17 December 2024 (UTC) | |||
*All these mentions of editor time make me have to mention ] (TLDR: our understanding of how editor time works is dreadfully incomplete). ] <sup>]</sup>] 02:44, 17 December 2024 (UTC) | |||
*:I went looking for @]'s comment because I know they had hung up the tools and came back, and I was interested in their perspective. But they've given me a different epiphany. I suddenly realize why people are doing confirmation RfAs: it's because of RECALL, and the one year immunity a successful RfA gives you. Maybe everyone else already figured that one out and is thinking "well duh Eek," but I guess I hadn't :) I'm not exactly sure what to do with that epiphany, besides note the emergent behavior that policy change can create. We managed to generate an entirely new process without writing a single word about it, and that's honestly impressive :P ] <sup>]</sup>] 18:18, 17 December 2024 (UTC) | |||
*::Worm That Turned followed through on a pledge he made in January 2024, before the 2024 review of the request for adminship process began. I don't think a pattern can be extrapolated from a sample size of one (or even two). That being said, it's probably a good thing if admins occasionally take stock of whether or not they continue to hold the trust of the community. As I previously commented, it would be great if these admins would use a lower cost way of sampling the community's opinion. ] (]) 18:31, 17 December 2024 (UTC) | |||
*:::{{ping|CaptainEek}} You are correct that a year's "immunity" results from a successful RRFA, but I see no evidence that this has been the ''reason'' for the RRFAs. Regards, ] (]) 00:14, 22 December 2024 (UTC) | |||
*::::If people decide to go through a community vote to get a one year immunity from a process that only might lead to a community vote which would then have a lower threshold than the one they decide to go through, and also give a year's immunity, then good for them. ] (]) 01:05, 22 December 2024 (UTC)
*::@] I'm mildly bothered by this comment, mildly because I assume it's lighthearted and non-serious. But just in case anyone does feel this way - I was very clear about my reasons for RRFA, I've written a lot about it, anyone is welcome to use my personal recall process without prejudice, and just to be super clear - I waive my "1 year immunity" - if someone wants to start a petition in the next year, do not use my RRfA as a reason not to. I'll update my userpage accordingly. I can't speak for Hog Farm, but his reasoning seems similar to mine, and immunity isn't it. ]<sup>TT</sup>(]) 10:28, 23 December 2024 (UTC) | |||
*:::@] my quickly written comment was perhaps not as clear as it could have been :) I'm sorry, I didn't mean to suggest that y'all had run for dubious reasons. As I said in my !vote, {{tq|Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here}}. I guess what I really meant was that the reason that we're having this somewhat spirited conversation seems to be the sense that re-RfA could provide a protection from recall. If not for recall and the one year immunity period, I doubt we'd have cared so much as to suddenly run two discussions about this. ] <sup>]</sup>] 16:59, 23 December 2024 (UTC) | |||
*::::I don't agree. No one else has raised a concern about someone seeking a one-year respite from a recall petition. Personally, I think essentially self-initiating the recall process doesn't really fit the profile of someone who wants to avoid the recall process. (I could invent some nefarious hypothetical situation, but since opening an arbitration case is still a possibility, I don't think it would work out as planned.) ] (]) 05:19, 24 December 2024 (UTC) | |||
*::I really don't think this is the reason behind WTT's and HF's reconfirmation RFAs. I don't think their RFAs had much utility and could have been avoided, but I don't doubt for a second that their motivations were anything other than trying to provide transparency and accountability for the community. ] ] 12:04, 23 December 2024 (UTC)
*I don't really care enough about reconf RFAs to think they should be restricted, but what about a lighter ORCP-like process (maybe even in the same place) where fewer editors can indicate, "yeah OK, there aren't really any concerns here, it would probably save a bit of time if you just asked at BN". ] (] • ]) 12:40, 19 December 2024 (UTC) | |||
*Can someone accurately describe for me what the status quo is? I reread this RfC twice now and am having a hard time figuring out what the current state of affairs is, and how the proposed alternatives will change them. <sub>Duly signed,</sub> ''']'''-''<small>(])</small>'' 14:42, 13 January 2025 (UTC) | |||
*:Option 2 is the status quo. The goal of the RFC is to see if the community wants to prohibit reconfirmation RFAs (option 1). The idea is that reconfirmation RFAs take up a lot more community time than a BN request so are unnecessary. There were 2 reconfirmation RFAs recently after a long dry spell. –] <small>(])</small> 20:49, 13 January 2025 (UTC) | |||
*:The status quo, documented at ], is that admins who resigned without being under controversy can seek readminship through either BN (where it's usually given at the discretion of an arbitrary bureaucrat according to the section I linked) or RfA (where all normal RfA procedures apply, and you see a bunch of people saying "the candidate's wasting the community's time and could've uncontroversially gotten adminship back at BN instead"). ] (]) 12:27, 14 January 2025 (UTC)
{{discussion bottom}} | |||
== Guideline against use of AI images in BLPs and medical articles? == | |||
I have recently seen AI-generated images being added to illustrate both BLPs (e.g. ], now removed) and medical articles (e.g. ]). While we don't have any clear-cut policy or guideline about these yet, they appear to be problematic. Illustrating a living person with an AI-generated image might misinform as to what that person actually looks like, while using AI in medical diagrams can lead to anatomical inaccuracies (such as the lung structure in the second image, where the pleura becomes a bronchiole twisting over the primary bronchi), or even medical misinformation. While a guideline against AI-generated images in general might be more debatable, do we at least have a consensus for a guideline against these two specific use cases?
To clarify, I am not including potentially relevant AI-generated images that only ''happen'' to include a living person (such as in ]), but exclusively those used to illustrate a living person in a ] context. ] (] · ]) 12:11, 30 December 2024 (UTC) | |||
:What about all biographies, including those of dead people? The lead image shouldn't be AI-generated for any biography. - ] (]) 12:17, 30 December 2024 (UTC)
::Same with animals, organisms etc. - ] (]) 12:20, 30 December 2024 (UTC) | |||
:I personally am '''strongly against''' using AI in biographies and medical articles - as you highlighted above, AI is absolutely not reliable in generating accurate imagery and may contribute to medical or general misinformation. I would 100% support a proposal banning AI imagery from these kinds of articles - and a recommendation to not use such imagery other than in specific scenarios. ]] 12:28, 30 December 2024 (UTC) | |||
:I'd prefer a guideline prohibiting the use of AI images full stop. There are too many potential issues with accuracy, honesty, copyright, etc. Has this already been proposed or discussed somewhere? – ] <small>(])</small> 12:38, 30 December 2024 (UTC) | |||
::There hasn't been a full discussion yet, and we have a list of uses at ], but it could be good to deal with clear-cut cases like this (which are already a problem) first, as the wider discussion is less certain to reach the same level of consensus. ] (] · ]) 12:44, 30 December 2024 (UTC) | |||
:Discussions are going on at ] and somewhat at ]. I recommend workshopping an RfC question (or questions) then starting an RfC. ] (]) 13:03, 30 December 2024 (UTC) | |||
::Oh, didn't catch the previous discussions! I'll take a look at them, thanks! ] (] · ]) 14:45, 30 December 2024 (UTC) | |||
:There is one very specific exception I would put to a very sensible blanket prohibition on using AI images to illustrate people, especially BLPs. That is where the person themselves is known to use that image, which I have encountered in ]. ] (]) 15:00, 30 December 2024 (UTC) | |||
::While the Ekpa portrait is just an upscale (and I'm not sure what positive value that has for us over its source; upscaling does not add accuracy, nor is it an artistic interpretation meant to reveal something about the source), this would be hard to translate to the general case. Many AI portraits would have copyright concerns, not just from the individual (who may have announced some appropriate release for it), but due to the fact that AI portraits can lean heavily on uncredited individual sources. --] (]) 16:04, 30 December 2024 (UTC) | |||
:::For the purposes of discussing whether to allow AI images at all, we should always assume that, for the purposes of (potential) policies and guidelines, there exist AI images we can legally use to illustrate every topic. We cannot use those that are not legal (including, but not limited to, copyright violations) so they are irrelevant. An image generator trained exclusively on public domain and cc0 images (and any other licenses that explicitly allow derivative works without requiring attribution) would not be subject to any copyright restrictions (other than possibly by the prompter and/or generator's license terms, which are both easy to determine). Similarly we should not base policy on the current state of the technology, but assume that the quality of its output will improve to the point it is equal to that of a skilled human artist. ] (]) 17:45, 30 December 2024 (UTC) | |||
::::The issue is, either there are public domain/CC0 images of the person (in which case they can be used directly) or there aren't, in which case the AI is making up how a person looks. ] (] · ]) 20:00, 30 December 2024 (UTC) | |||
::::We tend to use art representations either where no photographs are available (in which case, AI will also not have access to photographs) or where what we are showing is an artist's insight on how this person is perceived, which is not something that AI can give us. In any case, we don't have to build policy now around some theoretical AI in the future; we can deal with the current reality, and policy can be adjusted if things change in the future. And even that theoretical AI does make it more difficult to detect copyvio -- ] (]) 20:54, 30 December 2024 (UTC) | |||
:::I wouldn't call it an upscale given whatever was done appears to have removed detail, but we use that image because it is specifically the edited image which was sent to VRT. ] (]) 10:15, 31 December 2024 (UTC)
:Is there any clarification on using purely AI-generated images vs. using AI to edit or alter images? AI tools, such as to identify objects and remove them, or generate missing content. The generative expand feature would appear to be unreliable (and it is), but I use it to fill in gaps of cloudless sky produced from stitching together photos for a panorama (I don't use it if there are clouds, or for starry skies, as it produces non-existent stars or unrealistic clouds). ] (]) 18:18, 30 December 2024 (UTC)
::Yes, my proposal is only about AI-generated images, not AI-altered ones. That could in fact be a useful distinction to make if we want to workshop a RfC on the matter. ] (] · ]) 20:04, 30 December 2024 (UTC) | |||
:I'm not sure if we need a clear cut policy or guideline against them... I think we treat them the same way as we would treat an editor's kitchen table sketch of the same figure. ] (]) 18:40, 30 December 2024 (UTC) | |||
:For those wanting to ban AI images full stop, well, you are too late. Most professional image editing software, including the software in one's smartphone as well as desktop, uses AI somewhere. Noise reduction software uses AI to figure out what might be noise and what might be texture. Sharpening software uses AI to figure out what should be smooth and what might have a sharp detail it can invent. For example, a bird photo not sharp enough to capture feather detail will have feather texture imagined onto it. Same for hair. Or grass. Any image that has been cleaned up to remove litter or dust or spots will have the cleaned area AI generated based on its surroundings. The sky might be extended with AI. These examples are a bit different from a 100% imagined image created from a prompt. But probably not in a way that is useful as a rule. | |||
:I think we should treat AI generated images the same as any user-generated image. It might be a great diagram or it might be terrible. Remove it from the article if the latter, not because someone used AI. If the image claims to photographically represent something, we may judge whether the creator has manipulated the image too much to be acceptable. For example, using AI to remove a person in the background of an image taken of the BLP subject might be perfectly fine. People did that with traditional Photoshop/Lightroom techniques for years. Using AI to generate what claims to be a photo of a notable person is on dodgy ground wrt copyright. -- ]°] 19:12, 30 December 2024 (UTC) | |||
::I'm talking about the case of using AI to generate a depiction of a living person, not using AI to alter details in the background. That is why I only talk about AI-generated images, not AI-altered images. ] (] · ]) 20:03, 30 December 2024 (UTC) | |||
:Regarding some sort of brightline ban on the use of any such image in any medical-related article: absolutely not. For example, if someone wanted to use AI tools as opposed to other tools to make an image such as ] (as used in the "medical" article ]) I don't see a problem, so long as it is accurate. Accurate models and illustrations are useful and that someone used AI assistance as opposed to a chisel and a rock is of no concern. — ] <sup>]</sup> 19:26, 30 December 2024 (UTC)
:I believe that the appropriateness of AI images depends on how they are used. In BLP and medical articles such images are inappropriate, but it would also be inappropriate to ban them completely across the site. By the same logic, a full ban of AI would be like banning fire because people can get burned, without considering cooking. ] (]) 13:33, 31 December 2024 (UTC)
] <sup>]</sup> 00:13, 31 December 2024 (UTC)]] | |||
:I agree that AI-generated images should not be used in most cases. They essentially serve as misinformation. I also don't think that they're really comparable to drawings or sketches because AI-generation uses a level of photorealism that can easily trick the untrained eye into thinking it is real. ] (]) 20:46, 30 December 2024 (UTC) | |||
::AI doesn't need to be photorealistic though. I see two potential issues with AI. The first is images that might deceive the viewer into thinking they are photos, when they are not. The second is potential copyright issues. Outside of the copyright issues I don't see any unique concerns for an AI-generated image (that doesn't appear photorealistic). Any accuracy issues can be handled the same way a user who manually drew an image could be handled. ] (]) 21:46, 30 December 2024 (UTC) | |||
{{multiple image | |||
| image1 = Pope Francis in puffy winter jacket.jpg | |||
| image2 = Illustration of Brigette Lundy Paine by Sandra Mu.png | |||
| footer = ] and ] | |||
| total_width = 300 | |||
}} | |||
::AI-generated depictions of BLP subjects are often more "illustrative" than drawings/sketches of BLP subjects made by 'regular' editors like you and me. For example, compare the AI-generated image of Pope Francis and the user-created cartoon of Brigette Lundy-Paine. Neither image belongs on their respective bios, of course, but the AI-generated image is no more "misinformation" than the drawing. ] (]) 00:05, 31 December 2024 (UTC) | |||
:::I would argue the opposite: neither are made up, but the first one, because of its realism, might mislead readers into thinking that it is an actual photograph, while the second one is clearly a drawing. Which makes the first one less illustrative, as it carries potential for misinformation, despite being technically more detailed. ] (] · ]) 00:31, 31 December 2024 (UTC) | |||
::::AI-generated images should always say "AI-generated image of " in the image caption. No misleading readers that way. ] (]) 00:36, 31 December 2024 (UTC) | |||
:::::Yes, and they don't always do it, and we don't have a guideline about this either. The issue is, many people have many different proposals on how to deal with AI content, meaning we always end up with "no consensus" and no guidelines on use at all, even if most people are against it. ] (] · ]) 00:40, 31 December 2024 (UTC) | |||
::::::{{tq|always end up with "no consensus" and no guidelines on use at all, even if most people are against it}} Agreed. Even a simple proposal to have image captions note whether an image is AI-generated will have editors wikilawyer over the definition of 'AI-generated.' I take back my recommendation of starting an RfC; we can already predict how that RfC will end. ] (]) 02:28, 31 December 2024 (UTC) | |||
:Of interest perhaps is ] on the use of drawn cartoon images in BLPs. ] (]) 22:38, 30 December 2024 (UTC) | |||
:We should absolutely not be including any AI images in anything that is meant to convey facts (with the obvious exception of an AI image illustrating the concept of an AI image). I also don't think we should be encouraging AI-altered images -- the line between "regular" photo enhancement and what we'd call "AI alteration" is blurry, but we shouldn't want AI edits for the same reason we wouldn't want fake Photoshop composites. | |||
:That said, I would assume good faith here: some of these images are probably being sourced from Commons, and Commons is dealing with a lot of undisclosed AI images. ] (]) 23:31, 30 December 2024 (UTC) | |||
::] | |||
::Why wouldn't we want "fake Photoshop composites"? A ] can be very useful. I'd be sad if we banned ]. ] (]) 06:40, 31 December 2024 (UTC) | |||
:::Sorry, should have been more clear -- composites that present themselves as the real thing, basically what people would use deepfakes for now. ] (]) 20:20, 31 December 2024 (UTC) | |||
::::Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop through techniques like compositing. That line is that the diffusion model is reverse-engineering an image to match a text prompt from a pattern of semi-random static associated with similar text prompts. As such it's just automated glurge, at best it's only as good as the ability of the software to parse a text prompt and the ability of a prompter to draft sufficiently specific language. And absolutely none of that does anything to solve the "hallucination" problem. On the other hand, in photoshop, if I put in two layers both containing a bird on a transparent background, what I, the human making the image, see is what the software outputs. ] (]) 18:03, 15 January 2025 (UTC)
:::::{{tpq|Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop}} others do not. If you want to ban or restrict one but not the other then you need to explain how the difference can be reliably determined, and how one is materially different to the other in ways other than your personal opinion. ] (]) 18:45, 15 January 2025 (UTC) | |||
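(Aside: the compositing half of the comparison above can at least be made concrete. Deterministic layer compositing is the standard Porter-Duff "over" operator applied per pixel: the output is a pure function of the input layers, with no inference step anywhere. A minimal sketch in Python, with pixel values invented purely for illustration:)

```python
def over(top, bottom):
    """Porter-Duff 'over': composite the top RGBA pixel onto the bottom one.

    Pixels are (r, g, b, a) tuples with channels in 0..255. The result is
    fully determined by the two inputs; nothing is inferred or invented.
    """
    ta = top[3] / 255.0
    ba = bottom[3] / 255.0
    out_a = ta + ba * (1.0 - ta)
    if out_a == 0:
        return (0, 0, 0, 0)
    out_rgb = tuple(
        round((top[i] * ta + bottom[i] * ba * (1.0 - ta)) / out_a)
        for i in range(3)
    )
    return out_rgb + (round(out_a * 255),)

# An opaque red "bird" pixel over a white background is exactly red ...
assert over((255, 0, 0, 255), (255, 255, 255, 255)) == (255, 0, 0, 255)
# ... and a fully transparent pixel leaves the background untouched.
assert over((0, 0, 0, 0), (255, 255, 255, 255)) == (255, 255, 255, 255)
```

(The same inputs produce the same output every time, which is the sense in which a Photoshop composite differs from sampling a diffusion model; whether that difference should matter for policy is the dispute above.)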
:I don't think any guideline, let alone policy, would be beneficial and indeed on balance is more likely to be harmful. There are always only two questions that matter when determining whether we should use an image, and both are completely independent of whether the image is AI-generated or not: | |||
:#Can we use this image in this article? This depends on matters like copyright, fair use, whether the image depicts content that is legal for an organisation based in the United States to host, etc. Obviously if the answer is "no", then everything else is irrelevant, but as the law and WMF, Commons and en.wp policies stand today there exist some images in both categories we can use, and some images in both categories we cannot use. | |||
:#Does using this image in this article improve the article? This is relative to other options, one of which is always not using any image, but in many cases also involves considering alternative images that we can use. In the case of depictions of specific, non-hypothetical people or objects one criteria we use to judge whether the image improves the article is whether it is an accurate representation of the subject. If it is not an accurate representation then it doesn't improve the article and thus should not be used, regardless of why it is inaccurate. If it is an accurate representation, then its use in the article will not be misrepresentative or misleading, regardless of whether it is or is not AI generated. It may or may not be the best option available, but if it is then it should be used regardless of whether it is or is not AI generated. | |||
:The potential harm I mentioned above is twofold. Firstly, Misplaced Pages is, by definition, harmed when an image exists that we could use to improve an article but we do not use it in that article. A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article. The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been.
:Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware. ] (]) 00:52, 31 December 2024 (UTC) | |||
::I agree with almost the entirety of your post with a caveat on whether something "is an accurate representation". We can tell whether non-photorealistic images are accurate by assessing whether the image accurately conveys ''the idea'' of what it is depicting. Photos do more than convey an idea, they convey the actual look of something. With AI generated images that are photorealistic it is difficult to assess whether they accurately convey the look of something (the shading might be illogical in subtle ways, there could be an extra finger that goes unnoticed, a mole gets erased), but readers might be deceived by the photo-like presentation into thinking they are looking at an actual photographic depiction of the subject which could differ significantly from the actual subject in ways that go unnoticed. ] (]) 04:34, 31 December 2024 (UTC) | |||
::{{tq|A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article.}} That's why I'm suggesting a guideline, not a policy. Guidelines are by design more flexible, and ] still does (and should) apply in edge cases.{{pb}}{{tq|The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been.}} In that case, there is a licensing problem. AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that ''might'' have been AI-generated.{{pb}}{{tq|Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware.}} In that case, it's mostly because the ambiguity in wording: AI-edited images are very common, and are sometimes called "AI-generated", but here we should focus on actual prompt outputs, of the style "I asked a model to generate me an image of a BLP". ] (] · ]) 11:13, 31 December 2024 (UTC) | |||
:::Simply not having a completely unnecessary policy or guideline is infinitely better than relying on IAR - especially as this would have to be ignored ''every'' time it is relevant. When the AI image is not the best option (which obviously includes all the times it's unsuitable or inaccurate) existing policies, guidelines, practice and frankly common sense mean it won't be used. This means the only time the guideline would be relevant is when an AI image ''is'' the best option and as we obviously should be using the best option in all cases we would need to ignore the guideline against using AI images.
:::{{tpq|AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.}} The key words here are "supposed to be" and "shouldn't", editors absolutely ''will'' speculate that images are AI-generated and that the Commons labelling is incorrect. We are supposed to assume good faith, but this very discussion shows that when it comes to AI some editors simply do not do that. | |||
:::Regarding your final point, that might be what you are meaning but it is not what all other commenters mean when they want to exclude all AI images. ] (]) 11:43, 31 December 2024 (UTC) | |||
::::For your first point, the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image), but the model likely doesn't have any available image either and most likely just made it up. As my proposal is essentially limited to that (I don't include AI-edited images, only those that are purely generated by a model), I don't think there will be many cases where IAR would be needed.{{pb}}Regarding your two other points, you are entirely correct, and while I am hoping for nuance on the AI issue, it is clear that some editors might not do that. For the record, I strongly disagree with a blanket ban of "AI images" (which includes both blatant "prompt in model" creations and a wide range of more subtle AI retouching tools) or anything like that. ] (] · ]) 11:49, 31 December 2024 (UTC) | |||
:::::{{tpq|the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image)}}. There are only two possible scenarios regarding verifiability: | |||
:::::#The image is an accurate representation and we can verify that (e.g. by reference to non-free photos). | |||
:::::#*Verifiability is no barrier to using the image, whether it is AI generated or not. | |||
:::::#*If it is the best image available, and editors agree using it is better than not having an image, then it should be used whether it is AI generated or not. | |||
:::::#The image is either ''not'' an accurate representation, or we cannot verify whether it is or is not an accurate representation | |||
:::::#*The only reasons we should ever use the image are: | |||
:::::#**It has been the subject of notable commentary and we are presenting it in that context. | |||
:::::#**The subject verifiably uses it as a representation of themselves (e.g. as an avatar or logo) | |||
:::::#:This is already policy, whether the image is AI generated or not is completely irrelevant. | |||
:::::You will note that in no circumstance is it relevant whether the image is AI generated or not. ] (]) 13:27, 31 December 2024 (UTC) | |||
::::::In your first scenario, there is the issue of an accurate AI-generated image misleading people into thinking it is an actual photograph of the person, especially as they are most often photorealistic. Even besides that, a mostly accurate representation can still introduce spurious details, and this can mislead readers as they do not know to what level it is actually accurate. This scenario doesn't really happen with drawings (which are clearly not photographs), and is very much a consequence of AI-generated photorealistic pictures being a thing.{{pb}}In the second scenario, if we cannot verify that it is not an accurate representation, it can be hard to remove the image with policy-based reasons, which is why a guideline will again be helpful. Having a single guideline against fully AI-generated images takes care of all of these scenarios, instead of having to make new specific guidelines for each case that emerges because of them. ] (] · ]) 13:52, 31 December 2024 (UTC) | |||
:::::::If the image is misleading or unverifiable it should not be used, regardless of why it is misleading or unverifiable. This is existing policy and we don't need anything specifically regarding AI to apply it - we just need consensus that the image ''is'' misleading or unverifiable. Whether it is or is not AI generated is completely irrelevant. ] (]) 15:04, 31 December 2024 (UTC) | |||
::::{{tpq|AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.}} | |||
::::I mean... yes, we should? At the very least Commons should go hunting for mislabeled images -- that's the whole point of license review. The thing is that things are absolutely swamped over there and there are hundreds of thousands of images waiting for review of some kind. ] (]) 20:35, 31 December 2024 (UTC) | |||
:::::Yes, but that's a Commons thing. A guideline on English Misplaced Pages shouldn't decide of what is to be done on Commons. ] (] · ]) 20:37, 31 December 2024 (UTC) | |||
::::::I just mean that given the reality of the backlogs, there are going to be mislabeled images, and there are almost certainly going to be more of them over time. That's just how it is. We don't have control over that, but we do have control over what images go into articles, and if someone has legitimate concerns about an image being AI-generated, then they should be raising those. ] (]) 20:45, 31 December 2024 (UTC) | |||
*'''Support blanket ban on AI-generated images on Misplaced Pages'''. As others have highlighted above, this is not just a slippery slope but an outright downward spiral. We don't use AI-generated text and we shouldn't use AI-generated images: these aren't reliable and they're also ] scraped from who knows what and where. '''Use only reliable material from reliable sources'''. As for the argument of 'software now has AI features', we all know that there's a huge difference between someone using a smoothing feature and someone generating an image from a prompt. ] (]) 03:12, 31 December 2024 (UTC)
*:'''Reply''', the section of ] concerning images is ] which states "Original images created by a Wikimedian are not considered original research, ''so long as they do not illustrate or introduce unpublished ideas or arguments''". Using AI to generate an image only violates ] if you are using it to illustrate unpublished ideas, which can be assessed just by looking at the image itself. COPYVIO, however, cannot be assessed from looking at just the image alone, which AI could be violating. However, some images may be too simple to be copyrightable, such as, potentially, AI-generated images of chemicals or mathematical structures. ] (]) 04:34, 31 December 2024 (UTC)
*::Prompt generated images are unquestionably violation of ] and ]: Type in your description and you get an image scraping who knows what and from who knows where, often Misplaced Pages. Misplaced Pages isn't an ]. Get real. ] (]) 23:35, 1 January 2025 (UTC) | |||
*:::"Unquestionably"? Let me question that, @]. <code>;-)</code> | |||
*:::If an editor were to use an AI-based image-generating service and the prompt is something like this: | |||
*:::"I want a stacked bar chart that shows the number of games won and lost by ] each year. Use the team colors, which are red #DC052D, blue #0066B2, and black #000000. The data is: | |||
*:::* 2014–15: played 34 games, won 25, tied 4, lost 5 | |||
*:::* 2015–16: played 34 games, won 28, tied 4, lost 2 | |||
*:::* 2016–17: played 34 games, won 25, tied 7, lost 2 | |||
*:::* 2017–18: played 34 games, won 27, tied 3, lost 4 | |||
*:::* 2018–19: played 34 games, won 24, tied 6, lost 4 | |||
*:::* 2019–20: played 34 games, won 26, tied 4, lost 4 | |||
*:::* 2020–21: played 34 games, won 24, tied 6, lost 4 | |||
*:::* 2021–22: played 34 games, won 24, tied 5, lost 5 | |||
*:::* 2022–23: played 34 games, won 21, tied 8, lost 5 | |||
*:::* 2023–24: played 34 games, won 23, tied 3, lost 8" | |||
*:::I would expect it to produce something that is not a violation of either OR in general or OR's SYNTH section specifically. What would you expect, and why do you think it would be okay for me to put that data into a spreadsheet and upload a screenshot of the resulting bar chart, but you don't think it would be okay for me to put that same data into an image generator, get the same thing, and upload that?
*:::We must not mistake the tools for the output. Hand-crafted bad output is bad. AI-generated good output is good. ] (]) 01:58, 2 January 2025 (UTC) | |||
*::::Assuming you'd even get what you requested from the model without fiddling with the prompt for a while, these sorts of 'but we can use it for graphs and charts' devil's advocate scenarios aren't helpful. We're discussing generating images of people, places, and objects here and in those cases, yes, this would unquestionably be a form of ] & ]. As for the charts and graphs, there are any number of ways to produce these. ] (]) 03:07, 2 January 2025 (UTC)
*:::::{{tpq|We're discussing generating images of people, places, and objects here}} The proposal contains no such limitation. {{tpq| and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH.}} Do you have a citation for that? Other people have explained better than I can how that it is not necessarily true, and certainly not unquestionable. ] (]) 03:14, 2 January 2025 (UTC) | |||
*::::::As you're well aware, these images are produced by scraping and synthesizing material from who knows what and where: it's ultimately pure ] to produce these fake images and they're a straightforward product of synthesis of multiple sources (]) - worse yet, these sources are unknown because training data is by no means transparent. Personally, I'm strongly for a total ban on generative AI on the site exterior to articles on the topic of generative AI. Not only do I find this incredibly unethical, I believe it is intensely detrimental to Misplaced Pages, which is already a flailing and shrinking project. ] (]) 03:23, 2 January 2025 (UTC)
*:::::::So you think the lead image at ] is a SYNTH violation? Its (human) creator explicitly says "This is not done from one specific photo. As I usually do when I draw portraits of people that I can't see in person, I look at a lot of photos of them and then create my own rendition" in the image description, which sounds like "the product of synthesis of multiple sources" to me, and "these sources are unknown because" the images the artist looked at are not disclosed.
*:::::::A lot of my concern about blanket statements is the principle that what's ] is sauce for the gander, too. If it's okay for a human to do something by hand, then it should be okay for a human using a semi-automated tool to do it, too. | |||
*:::::::<small>(Just in case you hadn't heard, the rumors that the editor base is shrinking have been false for over a decade now. Compared to when you created your account in mid-2005, we have about twice as many high-volume editors.)</small> ] (]) 06:47, 2 January 2025 (UTC) | |||
*:::::::Review ] and your attempts at downplaying a prompt-generated image as "semi-automated" show the root of the problem: if you can't detect the difference between a human sketching from a reference and a machine scraping who-knows-what on the internet, you shouldn't be involved in this discussion. As for editor retention, this remains a serious problem on the site: while the site continues to grow (and becomes core fodder for AI-scraping) and becomes increasingly visible, editorial retention continues to drop. ] (]) 09:33, 2 January 2025 (UTC)
*::::::::Please scroll down below SYNTH to the next section titled "What is not original research" which begins with ], our policies on how images relate to OR. OR (including SYNTH) only applies to images with regards to if they illustrate "unpublished ideas or arguments". It does not matter, for instance, if you synthesize an original ''depiction'' of something, so long as the ''idea'' of that thing is not original. ] (]) 09:55, 2 January 2025 (UTC) | |||
*:::::::::Yes, which explicitly states: | |||
*::::::::::It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Misplaced Pages:Files for discussion. Images of living persons must not present the subject in a false or disparaging light. | |||
*:::::::::Using a machine to generate a fake image of someone is far beyond "manipulation" and it is certainly "false". Clearly we need explicit policies on AI-generated images of people or we wouldn't be having this discussion, but this as it stands clearly also falls under ]: there is zero question that this is a result of "synthesis of published material", even if the AI won't list what it used. Ultimately it's just a synthesis of a bunch of published composite images of who-knows-what (or who-knows-who?) the AI has scraped together to produce a fake image of a person. ] (]) 10:07, 2 January 2025 (UTC)
*:The latter images you describe should be SVG regardless. If there are models that can generate that, that seems totally fine since it can be semantically altered by hand. Any generation with photographic or "painterly" characteristics (e.g. generating something in the style of a painting or any other convention of visual art that communicates aesthetic particulars and not merely abstract visual particulars) seems totally unacceptable. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 07:00, 31 December 2024 (UTC) | |||
*:] | |||
*:@], here's an image I created. It illustrates the concept of 1% in an article. I made this myself, by typing 100 emojis and taking a screenshot. Do you really mean to say that if I'd done this with an image-generating AI tool, using a prompt like "Give me 100 dots in a 10 by 10 grid. Make 99 a dark color and 1, randomly placed, look like a baseball" that it would be hopelessly tainted, because AI is always bad? Or does your strongly worded statement mean something more moderate? | |||
*:I'd worry about photos of people (including dead people). I'd worry about photos of specific or unique objects that have to be accurate or they're worse than worthless (e.g., artwork, landmarks, maps). But I'm not worried about simple graphs and charts like this one, and I'm not worried about ordinary, everyday objects. If you want to use AI to generate a photorealistic image of a cookie, or a spoon, and the output you get ], I'm not actually going to worry about it. ] (]) 06:57, 31 December 2024 (UTC) | |||
*::As you know, Misplaced Pages has the unique factor of being entirely volunteer-run. Misplaced Pages has fewer and fewer editors and, long-term, we're seeing plummeting birth rates in areas where most Misplaced Pages editors live. I wouldn't expect a wave of new ones aimed at keeping the site free of bullshit in the near future.
*::In addition, the Wikimedia Foundation's hare-brained continued effort to turn the site into its political cash machine is no doubt also not helping, harming the site's public perception and leading to fewer new editors.
*::Over the course of decades (I've been here for around 20 years), it seems clear that the site will be negatively impacted by all this, especially in the face of generative AI. | |||
*::As a long-time editor who has frequently stumbled upon intense ] content, fended off armies of outside actors looking to shape the site into their ideological image (and been sent more than a few death threats), and who has identified large amounts of politically-motivated nonsense explicitly designed to fool non-experts in areas I know intimately well (such as folklore and historical linguistics topics), I think it need be said that the use of generative AI for content is especially dangerous because of its capabilities of fooling Misplaced Pages readers and Misplaced Pages editors alike.
*::Misplaced Pages is written by people for people. We need to draw a line in the sand to keep from being flooded by increasingly accessible hoax-machines. | |||
*::A blanket ban on generative AI resolves this issue or at least hands us another tool with which to attempt to fight back. We don't need what few editors we have here wasting what little time they can give the project checking over an ocean of AI-generated slop: '''we need more material from reliable sources and better tools to fend off bad actors usable by our shrinking editor base (anyone at the Wikimedia Foundation listening?), not more waves of generative AI garbage'''. ] (]) 07:40, 31 December 2024 (UTC) | |||
*:::A blanket ban doesn't actually resolve most of the issues though, and introduces new ones. Bad usages of AI can already be dealt with by existing policy, and malicious users will ignore a blanket ban anyways. Meanwhile, a blanket ban would harm many legitimate usages for AI. For instance, the majority of professional translators (at least Japanese to English) incorporate AI (or similar tools) into their workflow to speed up translations. Just imagine a professional translator who uses AI to help generate rough drafts of foreign language Misplaced Pages articles, before reviewing and correcting them, and another editor learning of this and mass reverting them for breaking the blanket ban, and ultimately causing them to leave. Many authors (particularly with carpal tunnel) use AI now to control their voice-to-text (you can train the AI on how you want character names spelled, the formatting of dialogue and other text, etc.). A[REDACTED] editor could train an AI to convert their voice into Misplaced Pages-formatted text. AI is subtly incorporated now into spell-checkers, grammar-checkers, photo editors, etc., in ways many people are not aware of. A blanket AI ban has the potential to cause many issues for a lot of people, without actually being that effective at dealing with malicious users. ] (]) 08:26, 31 December 2024 (UTC)
*::::I think this is the least convincing one I've seen here yet: It contains the ol' 'there are AI features in programs now' while also attempting to invoke accessibility and a little bit of 'we must have machines to translate!'. | |||
*::::As a translator myself, I can only say: ''Oh please''. Generative AI is notoriously terrible at translating and that's not likely to change. And I mean ''ever'' beyond a very, very basic level. Due to the complexities of communication and little matters like nuance, all machine translated material must be thoroughly checked and modified by, yes, ''human'' translators, who often encounter it spitting out complete bullshit scraped from who-knows-where (often Misplaced Pages itself). | |||
*::::I get that this topic attracts a lot of 'but what if generative AI is better than humans?' from the utopian tech crowd but the ''reality'' is that anyone who needs a machine to invent text and visuals for whatever reason simply shouldn't be using it on Misplaced Pages. | |||
*::::Either you, a human being, can contribute to the project or ''you can't''. Slapping a bunch of machine-generated (generative AI) visuals and text (much of it ultimately coming from Misplaced Pages in the first place!) onto the site isn't some kind of human substitute, it's just machine-regurgitated slop and is not helping the project.
*::::If people can't be confident that Misplaced Pages is ''made by humans, for humans'' the project is finally on its way out. ] (]) 09:55, 31 December 2024 (UTC)
*:::::I don't know how up to date you are on the current state of translation, but: | |||
*::::::'''' | |||
*::::::''Over three thousand full-time professional translators from around the world responded to the surveys, which were broken into a survey for CAT tool users and one for those who do not use any CAT tool at all.'' | |||
*::::::''88% of respondents use at least one CAT tool for at least some of their translation tasks.'' | |||
*::::::''Of those using CAT tools, 83% use a CAT tool for most or all of their translation work.'' | |||
*:::::Mind you, traditionally CAT tools didn't use AI, but many do now, which only adds to potential sources of confusion in a blanket ban of AI. ] (]) 17:26, 31 December 2024 (UTC) | |||
*::::::You're barking up the wrong tree with the pro-generative AI propaganda in response to me. I think we're all quite aware that generative AI tool integration is now common and that there's also a big effort to replace human translators — and anything that can be "written" — with machine-generated text. I'm also keenly aware that generative AI is ''absolutely horrible'' at translation and ''all of it must be thoroughly checked by humans'', as you would be if you were a translator yourself. ] (]) 22:20, 31 December 2024 (UTC)
*:::::"''all machine translated material must be thoroughly checked and modified by, yes, ''human'' translators''" | |||
*:::::You are just agreeing with me here. | |||
*::::::'''' -American Translation Society | |||
*:::::There are translators (particularly with non-creative works) who are using these tools to shift more towards reviewing. It should be up to them to decide what they think is the most efficient method for them. ] (]) 06:48, 1 January 2025 (UTC) | |||
*::::::And any translator who wants to use generative AI to ''attempt'' to translate can do so off the site. We're not here to check it for them. I strongly support a total ban on any generative AI used on the site exterior to articles on generative AI. ] (]) 11:09, 1 January 2025 (UTC) | |||
*:::::::I wonder what you mean by "on the site". The question here is "Is it okay for an editor to go to a completely different website, generate an image all by themselves, upload it to Commons, and put it in a Misplaced Pages article?" The question here is ''not'' "Shall we put AI-generating buttons on Misplaced Pages's own website?" ] (]) 02:27, 2 January 2025 (UTC) | |||
*:::::::I'm talking about users slapping machine-translated and/or machine-generated nonsense all over the site, only for us to have to go behind and not only check it but correct it. It takes users minutes to do this and it's already happening. It's the same for images. There are very few of us who volunteer here and our numbers are growing fewer. We need to be spending our time improving the site rather than opening the gate as wide as possible for a flood of AI-generated/rendered garbage. The site has enough problems that compound every day rather than having to fend off users armed with hoax machines at every corner. ] (]) 03:20, 2 January 2025 (UTC) | |||
*::::::::Sure, we're all opposed to "nonsense", but my question is: What about when the machine happens to generate something that is ''not'' "nonsense"? | |||
*::::::::I have some worries about AI content. I worry, for example, that they'll corrupt our sources. I worry that ] will get dramatically longer, and also that even more undetected, unconfessed, unretracted papers will get published and believed to be true and trustworthy. I worry that academia will go back to a model in which personal connections are more important, because you really can't trust what's published. I worry that scientific journals will start refusing to publish research unless it comes from someone employed by a trusted institution, that is willing to put its reputation on the line by saying they have directly verified that the work described in the paper was actually performed to their standards, thus scuttling the citizen science movement and excluding people whose institutions are upset with them for other reasons (Oh, you thought you'd take a job elsewhere? Well, we refuse to certify the work you did for the last three years...). | |||
*::::::::But I'm not worried about a Misplaced Pages editor saying "Hey AI, give me a diagram of a swingset" or "Make a chart for me out of the data I'm going to give you". In fact, if someone wants to pull the numbers out of ], feed it to an AI, and replace the template's contents with an AI-generated image (until they finally fix the Graphs extension), I'd consider that helpful. ] (]) 07:09, 2 January 2025 (UTC)
*::::::Translators are not using ''generative'' AI for translation, the applicability of LLMs to regular translation is still in its infancy and regardless will not be implementing any ''generative'' faculties in its output since that is the exact opposite of what translation is supposed to do. ] (]) 02:57, 2 January 2025 (UTC)
*:::::::{{tpq|Translators are not using generative AI for translation}} this entirely depends on what you mean by "generative". There are at least three contradictory understandings of the term in this one thread alone. ] (]) 03:06, 2 January 2025 (UTC) | |||
*:::::::Please, you can just go through the entire process with a simple prompt command now. The results are typically shit but you can generate a ton of it quickly, which is perfect for flooding a site like this one — especially without a strong policy against it. I've found myself cleaning up tons of AI-generated crap (and, yes, rendered) stuff here and elsewhere, and now I'm even seeing AI-generated responses to my own comments. It's beyond ridiculous. ] (]) 03:20, 2 January 2025 (UTC) | |||
* '''Ban AI-generated images from all articles, AI anything from BLP and medical articles''' is the position that seems it would permit all instances where there are plausible defenses that AI use does not fabricate or destroy facts intended to be communicated in the context of the article. That scrutiny is stricter with BLP and medical articles in general, and the restriction should be stricter to match. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 06:53, 31 December 2024 (UTC)
*:@], please see my comment immediately above. (We had an edit conflict.) Do you really mean "anything" and everything? Even a simple chart? ] (]) 07:00, 31 December 2024 (UTC) | |||
*::I think my previous comment is operative: almost anything we can see AI used programmatically to generate should be SVG, not raster—even if it means we are embedding raster images in SVG to generate examples like the above. I do not know if there are models that can generate SVG, but if there are I happily state I have no problem with that. I think I'm at risk of seeming downright paranoid—but understanding how errors can propagate and go unnoticed in practice, if we're to trust a black box, we need to at least be able to check what the black box has done on a direct structural level. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 07:02, 31 December 2024 (UTC) | |||
*:::A quick web search indicates that there are generative AI programs that create SVG files. ] (]) 07:16, 31 December 2024 (UTC) | |||
*::::Makes perfect sense that there would be. Again, maybe I come off like a paranoid lunatic, but I really need either the ability to check what the thing is doing, or the ability to check and correct exactly what a black box has done. (In my estimation, if you want to know what procedures a person has done, theoretically you can ask them to get a fairly satisfactory result, and the pre-AI algorithms used in image manipulation are canonical and more or less transparent. Acknowledging human error etc., with AI there is not even the theoretical promise that one can be given a truthful account of how it decided to do what it did.) <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 07:18, 31 December 2024 (UTC)
*:::::Like everyone said, there should be a ''de facto'' ban on using AI images in Misplaced Pages articles. They are effectively fake images pretending to be real, so they are out of step with the values of Misplaced Pages.--'''''] <sup>]</sup>''''' 08:20, 31 December 2024 (UTC) | |||
*::::::Except, not everybody ''has'' said that, because the majority of those of us who have refrained from hyperbole have pointed out that not all AI images are "fake images pretending to be real" (and those few that are can already be removed under existing policy). You might like to try actually reading the discussion before commenting further. ] (]) 10:24, 31 December 2024 (UTC) | |||
*:::::@], exactly how much "ability to check what the thing is doing" do you need to be able to do, when the image shows 99 dots and 1 baseball, to illustrate the concept of 1%? If the image above said {{tl|pd-algorithm}} instead of {{tl|cc-by-sa-4.0}}, would you remove it from the article, because you just can't be sure that it shows 1%? ] (]) 02:33, 2 January 2025 (UTC)
*::::::The above is a useful example to an extent, but it is a toy example. I really do think it is required in general when we aren't dealing with media we ourselves are generating. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 04:43, 2 January 2025 (UTC)
*:::::::How do we differentiate in policy between a "toy example" (that really would be used in an article) and "real" examples? Is it just that if I upload it, then you know me, and assume I've been responsible? ] (]) 07:13, 2 January 2025 (UTC) | |||
*::::There definitely exist generative AI for SVG files. Here's an example: I used generative AI in Adobe Illustrator to generate the SVG gear in ] (from ]) before drawing by hand the more informative parts of the image. The gear drawing is not great (a real gear would have uniform tooth shape) but maybe the shading is better than I would have done by hand, giving an appearance of dimensionality and surface material while remaining deliberately stylized. Is that the sort of thing everyone here is trying to forbid? | |||
*::::I can definitely see a case for forbidding AI-generated photorealistic images, especially of BLPs, but that's different from human oversight of AI in the generation of schematic images such as this one. —] (]) 01:15, 1 January 2025 (UTC) | |||
*:::::I'd include BDPs, too. I had to get a few AI-generated images of allegedly Haitian presidents deleted a while ago. The "paintings" were 100% fake, right down to the deformed medals on their military uniforms. An AI-generated "generic person" would be okay for some purposes. For a few purposes (e.g., illustrations of ]) it could even be preferable to have a fake "person" than a real one. But for individual/named people, it would be best not to have anything unless it definitely looks like the named person. ] (]) 07:35, 2 January 2025 (UTC) | |||
*I put it to you that our decision on this requires nuance. It's obviously insane to allow AI-generated images of, for example, Donald Trump, and it's obviously insane to ban AI-generated images from, for example, ] or ].—] <small>]/]</small> 11:21, 31 December 2024 (UTC) | |||
*:Of course, that's why I'm only looking at specific cases and refrain from proposing a blanket ban on generative AI. Regarding Donald Trump, we do have one AI-generated image of him that is reasonable to allow (in ]), as the image itself was the subject of relevant commentary. Of course, this is different from using an AI-generated image to illustrate ] himself, which is what my proposal would recommend against. ] (] · ]) 11:32, 31 December 2024 (UTC) | |||
*::That's certainly true, but others are adopting much more extreme positions than you are, and it was the more extreme views that I wished to challenge.—] <small>]/]</small> 11:34, 31 December 2024 (UTC) | |||
*:::Thanks for the (very reasoned) addition, I just wanted to make my original proposal clear. ] (] · ]) 11:43, 31 December 2024 (UTC) | |||
*Going off WAID's example above, perhaps we should be trying to restrict the use of AI where image accuracy/precision is essential, as it would be for BLP and medical info, among other cases, but in cases where we are talking generic or abstract concepts, like the 1% image, its use is reasonable. I would still say we should strongly prefer an image made by a human with high control of the output, but when accuracy is not as important as just the visualization, it's reasonable to turn to AI to help. ] (]) 15:12, 31 December 2024 (UTC)
* '''Support total ban of AI imagery''' - There are probable copyright problems and veracity problems with anything coming out of a machine. In a world of manipulated reality, Misplaced Pages will be increasingly respected for holding a hard line against synthetic imagery. ] (]) 15:39, 31 December 2024 (UTC)
*:For both issues AI vs not AI is irrelevant. For copyright, if the image is a copyvio we can't use it regardless of whether it is AI or not AI, if it's not a copyvio then that's not a reason to use or not use the image. If the image is not verifiably accurate then we already can (and should) exclude it, regardless of whether it is AI or not AI. For more detail see the extensive discussion above you've either not read or ignored. ] (]) 16:34, 31 December 2024 (UTC)
*'''Yes''', we absolutely should ban the use of AI-generated images in these subjects (and beyond, but that's outside the scope of this discussion). AI should not be used to make up a simulation of a living person. It does not actually depict the person and may introduce errors or flaws that don't actually exist. The picture ''does not depict the real person'' because it is quite simply fake. | |||
*Even worse would be using AI to develop medical images in articles ''in any way''. The possibility for error there is unacceptable. Yes, humans make errors too, but there is a) someone with the responsibility to fix it and b) someone conscious who actually made the picture, rather than a black box that spat it out after looking at similar training data. '']'' 🎄 ] — ] 🎄 20:08, 31 December 2024 (UTC)
*:It's ''incredibly'' disheartening to see multiple otherwise intelligent editors who have apparently not read and/or not understood what has been said in the discussion but rather responding with what appears to be knee-jerk reactions to anti-AI scaremongering. The sky will not fall in, Misplaced Pages is not going to be taken over by AI, AI is not out to subvert Misplaced Pages, we already can (and do) remove (and more commonly not add in the first place) false and misleading information/images. ] (]) 20:31, 31 December 2024 (UTC)
*::So what benefit does allowing AI images bring? We shouldn't be forced to decide these on a case-by-case basis. | |||
*::I'm sorry to dishearten you, but I still respectfully disagree with you. And I don't think this is "scaremongering" (although I admit that if it was, I would of course claim it wasn't). '']'' 🎄 ] — ] 🎄 21:02, 31 December 2024 (UTC) '']'' 🎄 ] — ] 🎄 20:56, 31 December 2024 (UTC) | |||
*:::Determining what benefits ''any'' image brings to Misplaced Pages can ''only'' be done on a case-by-case basis. It is literally impossible to know whether any image improves the encyclopaedia without knowing the context of which portion of what article it would illustrate, and what alternative images are and are not available for that same spot. | |||
*:::The benefit of allowing AI images is that when an AI image is the best option for a given article we use it. We gain absolutely nothing by prohibiting using the best image available, indeed doing so would actively harm the project without bringing any benefits. AI images that are misleading, inaccurate or any of the other negative things ''any'' image can be are never the best option and so are never used - we don't need any policies or guidelines to tell us that. ] (]) 21:43, 31 December 2024 (UTC) | |||
*'''Support blanket ban on AI-generated text or images in articles''', except in contexts where the AI-generated content is itself the subject of discussion (in a ] or ]). Generative AI is fundamentally at odds with Misplaced Pages's mission of providing reliable information, because of its propensity to distort reality or make up information out of whole cloth. It has no place in our encyclopedia. <span class="nowrap">—] (] | ])</span> 21:34, 31 December 2024 (UTC) | |||
*'''Support blanket ban on AI-generated images''' except in ABOUTSELF contexts. This is ''especially'' a problem given the preeminence Google gives to Misplaced Pages images in its image search. ] (]) 22:49, 31 December 2024 (UTC) | |||
*'''Ban across the board''', except in articles which are actually about AI-generated imagery or the tools used to create them, or the image itself is the subject of substantial commentary within the article for some reason. Even in those cases, clearly indicating that the image is AI-generated should be required. ] <small><sup>]</sup></small> 00:29, 1 January 2025 (UTC) | |||
*'''Oppose blanket bans''' that would forbid the use of AI assistance in creating diagrams or other deliberately stylized content. Also oppose blanket bans that would forbid AI illustrations in articles about AI illustrations. I am not opposed to banning photorealistic AI-generated images in non-AI-generation contexts or banning AI-generated images from BLPs unless the image itself is specifically relevant to the subject of the BLP. —] (]) 01:27, 1 January 2025 (UTC) | |||
]?]] | |||
*'''Oppose blanket bans''' AI is just a new buzzword so, for example, Apple phones now include "Apple Intelligence" as a standard feature. Does this mean that photographs taken using Apple phones will be inadmissible? That would be silly because legacy technologies are already rife with issues of accuracy and verification. For example, there's an image on the main page right now ''(right)''. This purports to be a particular person ("]") but, if you check the , you find that it may have been his brother and even the attribution to the artist is uncertain. AI features may help in exposing such existing weaknesses in our image use and so we should be free to use them in an intelligent way. ]🐉(]) 08:03, 1 January 2025 (UTC)
*:So, you expect the AI, ''notoriously trained on Misplaced Pages (and whatever else is floating around on the internet)'', to correct Misplaced Pages where humans have failed... using the data it ''scraped from Misplaced Pages (and who knows where else)''? ] (]) 11:12, 1 January 2025 (UTC)
*::I tried using the Deep Research option of Gemini to assess the attribution of the Macquarie portrait. Its stated methodology seemed quite respectable and sensible. | |||
{{cot|The Opie Portrait of Lachlan Macquarie: An Examination of its Attribution: Methodology}} | |||
To thoroughly investigate the attribution of the Opie portrait of Lachlan Macquarie, a comprehensive research process was undertaken. This involved several key steps: | |||
#Gathering information on the Opie portrait: This included details about its history, provenance, and any available information on its cost. | |||
#Reviewing scholarly articles and publications: This step focused on finding academic discussions specifically addressing the attribution of the portrait to John Opie. | |||
#Collecting expert opinions: Statements and opinions from art experts and historians were gathered to understand the range of perspectives on the certainty of the attribution. | |||
#Examining historical documents and records: This involved searching for any records that could shed light on the portrait's origins and authenticity, such as Macquarie's personal journals or contemporary accounts. | |||
#Exploring scientific and technical analyses: Information was sought on any scientific or technical analyses conducted on the portrait, such as pigment analysis or canvas dating, to determine its authenticity. | |||
#Comparing the portrait to other Opie works: This step involved analyzing the style and technique of the Opie portrait in comparison to other known portraits by Opie to identify similarities and differences. | |||
{{cob}} | |||
*::It was quite transparent in listing and citing the sources that it used for its analysis. These included the Misplaced Pages image but if one didn't want that included, it would be easy to exclude it. | |||
*::So, AIs don't have to be inscrutable black boxes. They can have programmatic parameters like the existing bots and scripts that we use routinely on Misplaced Pages. Such power tools seem needed to deal with the large image backlogs that we have on Commons. Perhaps they could help by providing captions and categories where these don't exist. | |||
*::]🐉(]) 09:09, 2 January 2025 (UTC) | |||
*:::They don't ''have to be black boxes'' but they are ''by design'': they exist in a legally dubious area and thus hide what they're scraping to avoid further legal problems. That's no secret. We know for example that Misplaced Pages is a core data set for likely most AIs today. They also notoriously and quite confidently spit out a lie ("hallucinate") and frequently spit out total nonsense. Add to that that they're restricted to whatever is floating around on the internet or whatever other data set they've been fed (usually just more internet), and many specialist topics, like texts on ancient history and even standard reference works, are not accessible on the internet (despite Google's efforts). ] (]) 09:39, 2 January 2025 (UTC) | |||
*:::While its stated methodology seems sensible, there's no evidence that it actually followed that methodology. The bullet points are pretty vague, and are pretty much the default methodologies used to examine actual historical works. ] (] · ]) 17:40, 2 January 2025 (UTC) | |||
*:::: Yes, there's evidence. As I stated above, the analysis is transparent and cites the sources that it used. And these all seem to check out rather than being invented. So, this level of AI goes beyond the first generation of LLM and addresses some of their weaknesses. I suppose that image generation is likewise being developed and improved and so we shouldn't rush to judgement while the technology is undergoing rapid development. ]🐉(]) 17:28, 4 January 2025 (UTC) | |||
* '''Oppose blanket ban''': best of luck to editors here who hope to be able to ban an entirely undefined and largely undetectable procedure. The term 'AI' as commonly used is no more than a buzzword - what ''exactly'' would be banned? And how does it improve the encyclopedia to encourage editors to object to images not simply because they are inaccurate, or inappropriate for the article, but because they subjectively look too good? Will the image creator be quizzed on Commons about the tools they used? Will creators who are transparent about what they have created have their images deleted while those who keep silent don’t? Honestly, this whole discussion is going to seem hopelessly outdated within a year at most. It’s like when early calculators were banned in exams because they were ‘cheating’, forcing students to use slide rules. ] (]) 12:52, 1 January 2025 (UTC) | |||
*:I am genuinely confused as to why this has turned into a discussion about a blanket ban, even though the original proposal exclusively focused on ''AI-generated'' images (the kind that is generated by an AI model from a prompt, which are already tagged on Commons, not regular images with AI enhancement or tools being used) and only in specific contexts. Not sure where the "subjectively look too good" thing even comes from, honestly. ] (] · ]) 12:58, 1 January 2025 (UTC) | |||
*::That just shows how ill-defined the whole area is. It seems you restrict the term 'AI-generated' to mean "images generated solely(?) from a text prompt". The question posed above has no such restriction. What a buzzword means is largely in the mind of the reader, of course, but to me and I think to many, 'AI-generated' means generated by AI. ] (]) 13:15, 1 January 2025 (UTC)
*:::I used the text prompt example because that is the most common way to have an AI model generate an image, but I recognize that I should've clarified it better. There is definitely a distinction between an image being ''generated'' by AI (like the Laurence Boccolini example below) and an image being ''altered'' or retouched by AI (which includes many features integrated in smartphones today). I don't think it's a "buzzword" to say that there is a meaningful difference between an image being made up by an AI model and a preexisting image being altered in some way, and I am surprised that many people understand "AI-generated" as including the latter. ] (] · ]) 15:24, 1 January 2025 (UTC) | |||
*'''Oppose as unenforceable.''' I just want you to imagine enforcing this policy against people who have not violated it. All this will do is allow Wikipedians who primarily contribute via text to accuse artists of using AI ] to get their contributions taken down. I understand the impulse to oppose AI on principle, but the labor and aesthetic issues don't actually have anything to do with Misplaced Pages. If there is not actually a problem with the content conveyed by the image—for example, if the illustrator intentionally corrected any hallucinations—then someone objecting over AI is not discussing page content. If the image was not even made with AI, they are hallucinating based on prejudices that are irrelevant to the image. The bottom line is that images should be judged on their content, not how they were made. Besides all the policy-driven stuff, if Misplaced Pages's response to the creation of AI imaging tools is to crack down on all artistic contributions to Misplaced Pages (which seems to be the inevitable direction of these discussions), what does that say? Categorical bans of this kind are ill-advised and anti-illustrator. ] (]) 15:41, 1 January 2025 (UTC) | |||
*:And the same applies to photography, of course. If in my photo of a garden I notice there is a distracting piece of paper on the lawn, nobody would worry if I used the old-style clone-stamp tool to remove it in Photoshop, adding new grass in its place (I'm assuming here that I don't change details of the actual landscape in any way). Now, though, Photoshop uses AI to achieve essentially the same result while making it simpler for the user. A large proportion of all processed photos will have at least some similar but essentially undetectable "generated AI" content, even if only a small area of grass. There is simply no way to enforce the proposed policy, short of banning all high-quality photography – which requires post-processing by design, and in which similar encyclopedically non-problematic edits are commonplace. ] (]) 17:39, 1 January 2025 (UTC) | |||
*:Before anyone objects that my example is not "an image generated from a text prompt", note that there's no mention of such a restriction in the proposal we are discussing. Even if there were, it makes no difference. Photoshop can already generate photo-realistic areas from a text prompt. If such use is non-misleading and essentially undetectable, it's fine; if it changes the image in such a way as to make it misleading, inaccurate or non-encyclopedic in any way it can be challenged on that basis. ] (]) 17:58, 1 January 2025 (UTC)
*::As I said previously, the text prompt is just an example, not a restriction of the proposal. The point is that you talk about editing an existing image (which is what you talk about, as you say {{tq|if it changes the image}}), while I am talking about creating an image ''ex nihilo'', which is what "generating" means. ] (] · ]) 18:05, 1 January 2025 (UTC)
*:::I'm talking about a photograph with AI-generated areas within it. This is commonplace, and is targeted by the proposal. Categorical bans of the type suggested are indeed ill-advised. ] (]) 18:16, 1 January 2025 (UTC) | |||
*:Even if the ban is unenforceable, there are many editors who will choose to use AI images if they are allowed and just as cheerfully skip them if they are not allowed. That would mean the only people posting AI images are those who choose to break the rule and/or don't know about it. That would probably add up to many AI images not used. ] (]) 22:51, 3 January 2025 (UTC) | |||
*'''Support blanket ban''' because "AI" is a fundamentally unethical technology based on the exploitation of labor, the wanton destruction of the planetary environment, and the subversion of every value that an encyclopedia should stand for. ABOUTSELF-type exceptions for "AI" output ''that has already been generated'' might be permissible, in order to document the cursed time in which we live, but those exceptions are going to be rare. How many examples of Shrimp Jesus slop do we need? ] (]) 23:30, 1 January 2025 (UTC) | |||
*'''Support blanket ban''' - Primarily because of the "poisoning the well"/"dead internet" issues created by it. ] (]) 14:30, 2 January 2025 (UTC) | |||
* '''Support a blanket ban''' to assure some control over AI-creep in Misplaced Pages. And per discussion. ] (]) 10:50, 3 January 2025 (UTC) | |||
* '''Support that ] applies to images''': images should be verifiable, neutral, and absent of original research. AI is just the latest quickest way to produce images that are original, unverifiable, and potentially biased. Is anyone in their right mind saying that we allow people to game our rules on ] and ] by using images instead of text? ] (]) 17:04, 3 January 2025 (UTC) | |||
*:As an aside on this: in some cases Commons is being treated as a way of side-stepping ] and other restrictions. Stuff that would get deleted if it were written content on WP gets in to WP as images posted on Commons. The worst examples are those conflict maps that are created from a bunch of Twitter posts (eg the Syrian civil war one). AI-generated imagery is another field where that appears to be happening. ] (]) 10:43, 4 January 2025 (UTC) | |||
*'''Support temporary blanket ban''' with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. I support an exception for when the article is about the image itself and that image is notable, such as the photograph of the black-and-blue/gold-and-white dress in ] and/or examples of AI images in articles in which they are relevant. E.g. "here is what a hallucination is: count the fingers." ] (]) 23:01, 3 January 2025 (UTC)
* First, I think any guidance should avoid referring to specific technology, as that changes rapidly and is used for many different purposes. Second, assuming that the image in question has a suitable copyright status for use on Misplaced Pages, the key question is whether or not the reliability of the image has been established. If the intent of the image is to display 100 dots with 99 having the same appearance and 1 with a different appearance, then ordinary math skills are sufficient and so any Misplaced Pages editor can evaluate the reliability without performing original research. If the intent is to depict a likeness of a specific person, then there needs to be reliable sources indicating that the image is sufficiently accurate. This is the same for actual photographs, re-touched ones, drawings, hedcuts, and so forth. Typically this can be established by a reliable source using that image with a corresponding description or context. ] (]) 17:59, 4 January 2025 (UTC) | |||
*'''Support Blanket Ban on AI generated imagery''' per most of the discussion above. It's a very slippery slope. I ''might'' consider a very narrow exception for an AI generated image of a person that was specifically authorized or commissioned by the subject. -] (]) 02:45, 5 January 2025 (UTC) | |||
* '''Oppose blanket ban''' It is far too early to take an absolutist position, particularly when the potential is enormous. Misplaced Pages is already an image desert and to reject something that is only at the cusp of development is unwise. '''<span style="text-shadow:7px 7px 8px black; font-family:Papyrus">]<sup>]</sup></span>''' 20:11, 5 January 2025 (UTC)
*'''Support blanket ban''' on AI-generated images except in ABOUTSELF contexts. An encyclopedia should not be using fake images. I do not believe that further nuance is necessary. ] (]) 22:44, 5 January 2025 (UTC) | |||
*'''Support blanket ban''' as the general guideline, as accuracy, personal rights, and intellectual rights issues are very weighty, here (as is disclosure to the reader). (I could see perhaps supporting adoption of a sub-guideline for ways to come to a broad consensus in individual use cases (carve-outs, except for BLPs) which address all the weighty issues on an individual use basis -- but that needs to be drafted and agreed to, and there is no good reason to wait to adopt the general ban in the meantime). ] (]) 15:32, 8 January 2025 (UTC) | |||
] | |||
*'''Support indefinite blanket ban except ABOUTSELF and simple abstract examples''' (such as the image of 99 dots above). In addition to all the issues raised above, including copyvio and creator consent issues, in cases of photorealistic images it may never be obvious to all readers exactly which elements of the image are guesswork. The cormorant picture at the head of the section reminded me of ]. Had AI been trained on paintings of horses instead of actual videos and used to "improve" said videos, we would've ended up with serious delusions about the horse's gait. We don't know what questions -- scientific or otherwise -- photography will be used to settle in the coming years, but we do know that consumer-grade photo AI has already been trained to intentionally fake detail to draw sales, such as on photos of the Moon. I think it's unrealistic to require contributors to take photos with expensive cameras or specially-made apps, but Misplaced Pages should act to limit its exposure to this kind of technology as far as is feasible. <span style="font-family:Garamond,Palatino,serif;font-size:115%;background:-webkit-linear-gradient(red,red,red,blue,blue,blue,blue);-webkit-background-clip:text;-webkit-text-fill-color:transparent">] ]</span> 20:57, 9 January 2025 (UTC) | |||
*'''Support at least some sort of recommendation against''' the use of AI-generated imagery in non-AI contexts – except obviously where the topic of the article is specifically related to AI-generated imagery (], ], ], etc.). At the very least the consensus below about BLPs should be extended to all historical biographies, as all the examples I've seen (see ]) fail ] (failing to add anything to the sourced text) and serve only to mislead the reader. We include images for a reason, not just for decoration. I'm also reminded of the essay ], and the distinction it makes between notable depictions of historical people (which can be useful to illustrate articles) and non-notable fictional portraits which in its (imo well argued) view {{tq|have no legitimate encyclopedic function whatsoever}}. ] ☞️ ] 14:36, 14 January 2025 (UTC)
*:Anything that fails WP:IMAGERELEVANCE can be, should be, and ''is'', excluded from use already, likewise any images which {{tpq|have no legitimate encyclopedic function whatsoever.}} This applies to AI and non-AI images equally and identically. Just as we don't have or need a policy or guideline specifically saying don't use irrelevant or otherwise non-encyclopaedic watercolour images in articles we don't need any policy or guideline specifically calling out AI - because it would (as you demonstrate) need to carve out exceptions for when its use ''is'' relevant. ] (]) 14:45, 14 January 2025 (UTC)
*::That would be an easy change; just add a sentence like "AI-generated images of individual people are primarily decorative and should not be used". We should probably do that no matter what else is decided. ] (]) 23:24, 14 January 2025 (UTC) | |||
*:::Except that is both not true and irrelevant. ''Some'' AI-generated images of individual people are primarily decorative, but not all of them. If an image is purely decorative it shouldn't be used, regardless of whether it is AI-generated or not. ] (]) 13:43, 15 January 2025 (UTC) | |||
*::::Can you give an example of an AI-generated image of an individual person that is (a) not primarily decorative and also (b) not copied from the person's social media/own publications, and that (c) at least some editors think would be a good idea? | |||
*::::"Hey, AI, please give me a realistic-looking photo of this person who died in the 12th century" is not it. "Hey, AI, we have no freely licensed photos of this celebrity, so please give me a line-art caricature" is not it. What is? ] (]) 17:50, 15 January 2025 (UTC) | |||
*:::::Criteria (b) and (c) were not part of the statement I was responding to, and make it a ''very'' significantly different assertion. I will ] that you are not making ] arguments in bad faith, but the frequent fallacious argumentation in these AI discussions is getting tiresome. | |||
*:::::Even with the additional criteria it is still irrelevant - if no editor thinks an image is a good idea, then it won't be used in an article regardless of why they don't think it's a good idea. If some editors think an individual image is a good idea then it's obviously potentially encyclopaedic and needs to be judged on its merits (whether it is AI-generated is completely irrelevant to its encyclopaedic value). An image that the subject uses on their social media/own publications to identify themselves (for example as an avatar) is the perfect example of the type of image which is frequently used in articles about that individual. ] (]) 18:56, 15 January 2025 (UTC)
{{clear}} | |||
===BLPs=== | |||
{{Archive top | |||
|status = Consensus against | |||
|result = There is clear consensus against using AI-generated imagery to depict BLP subjects. Marginal cases (such as major AI enhancement or where an AI-generated image of a living person is itself notable) can be worked out on a case-by-case basis. I will add a sentence reflecting this consensus to the ] and the ]. —] (]) 14:02, 8 January 2025 (UTC) | |||
}} | |||
Are AI-generated images (generated via text prompts, see also: ]) okay to use to depict BLP subjects? The ] example was mentioned in the opening paragraph. The image was created using ], {{tq|a text-to-image model developed by xAI, to generate images...As with other text-to-image models, Aurora generates images from natural language descriptions, called prompts.}} ]]] ] (]) 12:34, 31 December 2024 (UTC) | |||
]]] | |||
]: <ins>Note</ins> that these images can either be photorealistic in style (such as the Laurence Boccolini example) or non-photorealistic in style (see the ] example, which was generated using ], another text-to-image model).
] (]) 11:10, 3 January 2025 (UTC) {{clear}} | |||
{{small|notified: ], ], ], ] -- ] (]) 11:27, 2 January 2025 (UTC)}} | |||
*'''No.''' I don't think they are at all, as, despite looking photorealistic, they are essentially just speculation about what the person might look like. A photorealistic image conveys the look of something up to the details, and giving a false impression of what the person looks like (or, at best, just guesswork) is actively counterproductive. (Edit 21:59, 31 December 2024 (UTC): clarified bolded !vote since everyone else did it) ] (] · ]) 12:46, 31 December 2024 (UTC) | |||
*:That AI generated image looks like ] wearing a Laurence Boccolini suit. ] (]) 12:50, 31 December 2024 (UTC) | |||
*:There are plenty of non-free images of Laurence Boccolini with which this image can be compared. Assuming at least most of those are accurate representations of them (I've never heard of them before and have no other frame of reference) the image above is similar to but not an accurate representation of them (most obviously but probably least significantly, in none of the available images are they wearing that design of glasses). This means the image should not be used to identify them ''unless'' they use it to identify themselves. It should not be used elsewhere in the article unless it has been the subject of notable commentary. That it is an AI image makes absolutely no difference to any of this. ] (]) 16:45, 31 December 2024 (UTC) | |||
*'''No'''. Well, that was easy.{{pb}}<!--converted from 2 lines ~ToBeFree-->They are fake images; they do not actually depict the person. They depict an AI-generated ''simulation'' of a person that may be inaccurate. '']'' 🎄 ] — ] 🎄 20:00, 31 December 2024 (UTC) | |||
*:Even if the subject uses the image to identify themselves, the image is still fake. '']'' (] — ]) 19:17, 2 January 2025 (UTC) | |||
*'''No''', with the caveat that it's mostly on the grounds that we don't have enough information and when it comes to BLP we are required to exercise caution. If at some point in the future AI-generated photorealistic simulacrums of living people become mainstream with major newspapers and academic publishers it would be fair to revisit any restrictions, but in this I strongly believe that we should follow, not lead. ] (]) 20:37, 31 December 2024 (UTC)
*'''No'''. The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person. <span class="nowrap">—] (] | ])</span> 21:30, 31 December 2024 (UTC) | |||
*'''No''' except perhaps, maybe, if the subject explicitly is already using that image to represent themselves. But mostly no. -] (]) 21:32, 31 December 2024 (UTC) | |||
*'''Yes''', when that image is an accurate representation and better than any available alternative, used by the subject to represent themselves, or the subject of notable commentary. However, as these are the exact requirements to use ''any'' image to represent a BLP subject this is already policy. ] (]) 21:46, 31 December 2024 (UTC) | |||
*:How well can we determine how accurate a representation it is? Looking at the example above, I'd argue that the real ] has a somewhat rounder/pointier chin, a wider mouth, and possibly different eye wrinkles, although the latter probably depends quite a lot on the facial expression. | |||
*:How accurate a representation a photorealistic AI image is is ultimately a matter of editor opinion. '']'' 🎄 ] — ] 🎄 21:54, 31 December 2024 (UTC) | |||
*::{{tpq|How well can we determine how accurate a representation it is?}} in exactly the same way that we can determine whether a human-crafted image is an accurate representation. How accurate a representation ''any'' image is is ultimately a matter of editor opinion. Whether an image is AI or not is irrelevant. I agree the example image above is not sufficiently accurate, but we wouldn't ban photoshopped images because one example was not deemed accurate enough, because we are rational people who understand that one example is not representative of an entire class of images - at least when the subject is something other than AI. ] (]) 23:54, 31 December 2024 (UTC) | |||
*:::I think except in a few exceptional circumstances of actual complex restorations, human photoshopping is not going to change or distort a person's appearance in the same way an AI image would. Modifications done by a person who is paying attention to what they are doing and merely enhancing an image, by a person who is aware, while they are making changes, that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator. '']'' 🎄 ] — ] 🎄 00:14, 1 January 2025 (UTC)
*::::I'm guessing your filter bubble doesn't include ] and their notorious ] problems. ] (]) 02:46, 2 January 2025 (UTC) | |||
*:::A photo of a person can be connected to a specific time, place, and subject that existed. It can be compared to other images sharing one or more of those properties. A photo that was PhotoShopped is still either a generally faithful reproduction of a scene that existed, or has significant alterations that can still be attributed to a human or at least to a specific algorithm, e.g. filters. The artistic license of a painting can still be attributed to a human and doesn't run much risk of being misidentified as real, unless it's by Chuck Close et al. An AI-generated image cannot be connected to a particular scene that ever existed and cannot be attributable to a human's artistic license (and there is legal precedent that such images are not copyrightable to the prompter specifically because of this). Individual errors in a human-generated artwork are far more predictable, understandable, identifiable, traceable... than those in AI-generated images. We have innate assumptions when we encounter real images or artwork that are just not transferable. These are meaningful differences to the vast majority of people: according to a , 87% of respondents want AI-generated art to ''at least'' be transparent, and 98% consider authentic images "pivotal in establishing trust". {{pb}}And even if you disagree with all that, can you not see the larger problem of AI images on Misplaced Pages getting propagated into generative AI corpora? ] (]) 04:20, 2 January 2025 (UTC) | |||
*::::I agree that our old assumptions don't hold true. I think the world will need new assumptions. We will probably have those in place in another decade or so. | |||
*::::I think we're ], not here to protect AI engines from ingesting AI-generated artwork. Figuring out what they should ingest is their problem, not mine. ] (]) 07:40, 2 January 2025 (UTC) | |||
*'''Absolutely no fake/AI images of people, photorealistic or otherwise'''. How is this even a question? These images are fake. Readers need to be able to trust Misplaced Pages, not navigate around whatever junk someone has created with a prompt and presented as somehow representative. This includes text. ] (]) 22:24, 31 December 2024 (UTC) | |||
*'''No''' except for edge cases (mostly, if the image itself is notable enough to go into the article). ] (]) 22:31, 31 December 2024 (UTC) | |||
*'''Absolutely not''', except for ABOUTSELF. "They're fine if they're accurate enough" is an obscenely naive stance. ] (]) 23:06, 31 December 2024 (UTC) | |||
* '''No''' with no exceptions. ] (]) 23:54, 31 December 2024 (UTC) | |||
*'''No'''. We don't permit falsifications in BLPs. ] <small><sup>]</sup></small> 00:30, 1 January 2025 (UTC) | |||
*:For the requested clarification by {{u|Some1}}, no AI-generated images (except when the image ''itself'' is specifically discussed in the article, and even then it should not be the lead image and it should be clearly indicated that the image is AI-generated), no drawings, no nothing of that sort. ''Actual photographs'' of the subject, nothing else. Articles are not required to have images at all; no image whatsoever is preferable to something which is ''not'' an image of the person. ] <small><sup>]</sup></small> 05:42, 3 January 2025 (UTC) | |||
*'''No, but with exceptions'''. I could imagine a case where a specific AI-generated image has some direct relevance to the notability of the subject of a BLP. In such cases, it should be included, if it could be properly licensed. But I do oppose AI-generated images as portraits of BLP subjects. —] (]) 01:27, 1 January 2025 (UTC) | |||
*:Since I was pinged on this point: when I wrote "I do oppose AI-generated images as portraits", I meant exactly that, including all AI-generated images, such as those in a sketchy or artistic style, not just the photorealistic ones. I am not opposed to certain uses of AI-generated images in BLPs when they are not the main portrait of the subject, for instance in diagrams (not depicting the subject) to illustrate some concept pioneered by the subject, or in case someone becomes famous for being the subject of an AI-generated image. —] (]) 05:41, 3 January 2025 (UTC) | |||
*'''No''', and no exceptions or do-overs. Better to have no images (or Stone-Age style cave paintings) than ''Frankenstein'' images, no matter how accurate or artistic. Akin to shopped manipulated photographs, they should have no room (or room service) at the WikiInn. ] (]) 01:34, 1 January 2025 (UTC) | |||
*:Some "shopped manipulated photographs" are misleading and inaccurate, others are not. We can and do exclude the former from the parts of the encyclopaedia where they don't add value without specific policies and without excluding them where they are relevant (e.g. ]) or excluding those that are not misleading or inaccurate. AI images are no different. ] (]) 02:57, 1 January 2025 (UTC) | |||
*::Assuming we know. Assuming it's material. The infobox image in – and the only extant photo of – ] was "photoshopped" by a marketing team, maybe half a century before Adobe Photoshop was created. They wanted to show him wearing a necktie. I don't think that this level of manipulation is actually a problem. ] (]) 07:44, 2 January 2025 (UTC) | |||
*'''Yes''', so long as it is an accurate representation. ] ] 03:40, 1 January 2025 (UTC) | |||
*'''No''' not for BLPs. ] (]) 04:15, 1 January 2025 (UTC) | |||
*'''No''' Not at all relevant for pictures of people, as the accuracy is not enough and can misrepresent. Also (and I'm shocked as it seems no one has mentioned this), what about copyright issues? Who holds the copyright for an AI-generated image? The user who wrote the prompt? The creator(s) of the AI model? The creator(s) of the images in the database that the AI used to create the images? It sounds to me like such a clusterfuck of copyright issues that I don't understand how this is even a discussion. --] (]) 07:10, 1 January 2025 (UTC)
*:Under the US law / copyright office, machine-generated images including those by AI cannot be copyrighted. That also means that AI images aren't treated as derivative works.<br style="margin-bottom:0.5em"/>What is still under legal concern is whether the use of bodies of copyrighted works without any approval or license from the copyright holders to train AI models is under fair use or not. There are multiple court cases where this is the primary challenge, and none have yet reached a decision. Assuming the courts rule that there was no fair use, that would either require the entity that owns the AI to pay fines and ongoing licensing costs, or delete their trained model to start afresh with free or licensed works, but in either case, that would not impact how we'd use any resulting AI image from a copyright standpoint.<span id="Masem:1735741774879:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> — ] (]) 14:29, 1 January 2025 (UTC)</span>
*'''No''', I'm in agreement with ] here. Whether we like it or not, the usage of a portrait on an article implies that it's just that, a portrait. It's incredibly disingenuous to users to represent an AI-generated photo as truth. ] (]) 09:32, 1 January 2025 (UTC)
*:So you just said a portrait can be used because Misplaced Pages tells you it's a portrait, and thus not a real photo. Can't AI be exactly the same? As long as we tell readers it is an AI representation? Heck, most AI looks closer to the real thing than any portrait. ] (]) 10:07, 2 January 2025 (UTC)
*::To clarify, I didn't mean "portrait" as in "painting," I meant it as "photo of person." | |||
*::However, I really want to stick to what you say at the end there: {{tq|Heck, most AI looks closer to the real thing than any portrait.}} | |||
*::That's exactly the problem: by looking close to the "real thing" it misleads users into believing a non-existent source of truth.{{br|2}} | |||
*::Per the wording of the RfC of "{{tq|depict BLP subjects}}," I don't think there would be any valid case to utilize AI images. I hold a strong No. ] (]) 04:15, 3 January 2025 (UTC) | |||
*'''No.''' We should not use AI-generated images for situations like this, they are basically just guesswork by a machine as Quark said and they can misinform readers as to what a person looks like. Plus, there's a big grey area regarding copyright. For an AI generator to know what somebody looks like, it has to have photos of that person in its dataset, so it's very possible that they can be considered derivative works or copyright violations. Using an AI image (derivative work) to get around the fact that we have no free images is just fair use with extra steps. ] (]) 19:33, 1 January 2025 (UTC) ]?]] | |||
*'''Maybe''' There was a prominent BLP image which we displayed on the ]. ''(right)'' This made me uneasy because it was an artistic impression created from photographs rather than life. And it was "colored digitally". Functionally, this seems to be exactly the same sort of thing as the ] composite. The issue should not be whether there's a particular technology label involved but whether such creative composites and artists' impressions are acceptable as better than nothing. ]🐉(]) 08:30, 1 January 2025 (UTC) | |||
*:Except it is clear to everyone that the illustration to the right is a sketch, a human rendition, while in the photorealistic image above, it is less clear. '']'' (] — ]) 14:18, 1 January 2025 (UTC) | |||
*::Except it says right below it "AI-generated image of Laurence Boccolini." How much more clear can it be when it says point-blank "AI-generated image"? ] (]) 10:12, 2 January 2025 (UTC)
*:::Commons descriptions do not appear on our articles. ] (]) 10:28, 2 January 2025 (UTC) | |||
*:::People taking a quick glance at an infobox image that looks pretty like a photograph are not going to scrutinize commons tagging. '']'' (] — ]) 14:15, 2 January 2025 (UTC) | |||
*::Keep in mind that many AIs can produce works that match various styles, not just photographic quality. It is still possible for AI to produce something that looks like a watercolor or sketched drawing.<span id="Masem:1735742005673:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> — ] (]) 14:33, 1 January 2025 (UTC)</span> | |||
*:::Yes, you're absolutely right. But so far photorealistic images have been the most common to illustrate articles (see ] for some examples). '']'' (] — ]) 14:37, 1 January 2025 (UTC)
*::::Then push to ban photorealistic images, rather than pushing for a blanket ban that would also apply to obvious sketches. —] (]) 20:06, 1 January 2025 (UTC) | |||
*:::::Same thing I wrote above, but for "photoshopping" read "drawing": (Bold added for emphasis) | |||
*:::::{{tqq|...a human is not going to change or distort a person's appearance in the same way an AI image would. It is done by a person who is paying attention to what they are doing, by a '''person who is aware, while they are making it, that they might be distorting the image and is, I can only assume, trying to minimise it''' – those careful modifications shouldn't be equated with something made up by an AI image generator.}} '']'' (] — ]) 20:56, 1 January 2025 (UTC)
*::::::@] then why are you advocating for a ban on AI images rather than a ban on distorted images? Remember that with careful modifications by someone who is aware of what they are doing that AI images can be made more accurate. Why are you assuming that a human artist is trying to minimise the distortions but someone working with AI is not? ] (]) 22:12, 1 January 2025 (UTC) | |||
*:::::::I believe that AI-generated images are fundamentally misleading because they are a simulation by a machine rather than a drawing by a human. To quote pythoncoder above: {{tqq|The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person.}} '']'' (] — ]) 00:16, 2 January 2025 (UTC) | |||
*::::::::Once again your actual problem is not AI, but with misleading images. Which can be, and are, already a violation of policy. ] (]) 01:17, 2 January 2025 (UTC) | |||
*:::::::::I think all AI-generated images, except simple diagrams as WhatamIdoing pointed out above, are misleading. So yes, my problem is with misleading images, which includes all photorealistic images generated by AI, which is why I support this proposal for a blanket ban in BLPs and medical articles. '']'' (] — ]) 02:30, 2 January 2025 (UTC)
*::::::::::To clarify, I'm willing to make an exception in this proposal for very simple geometric diagrams. '']'' (] — ]) 02:38, 2 January 2025 (UTC) | |||
*::::::::::Despite the fact that not all AI-generated images are misleading, not all misleading images are AI-generated and it is not always possible to tell whether an image is or is not AI-generated? ] (]) 02:58, 2 January 2025 (UTC) | |||
*:::::::::::Enforcement is a separate issue. Whether or not all (or the vast majority) of AI images are misleading is the subject of this dispute. | |||
*:::::::::::I'm not going to mistreat the horse further, as we've each made our points and understand where the other stands. '']'' (] — ]) 15:30, 2 January 2025 (UTC) | |||
*::::::::::Even "simple diagrams" are not clear-cut. The process of AI-generating any image, no matter how simple, is still very complex and can easily follow any number of different paths to meet the prompt constraints. These paths through embedding space are black boxes and the likelihood they converge on the same output is going to vary wildly depending on the degrees of freedom in the prompt, the dimensionality of the embedding space, token corpus size, etc. The only thing the user can really change, other than switching between models, is the prompt, and at some point constructing a prompt that is guaranteed to yield the same result 100% of the time becomes a ] exercise. This is in contrast with non-generative AI diagram-rendering software that follow very fixed, reproducible, ''known'' paths. ] (]) 04:44, 2 January 2025 (UTC) | |||
*:::::::::::Why does the path matter? If the output is correct it is correct no matter what route was taken to get there. If the output is incorrect it is incorrect no matter what route was taken to get there. If it is unknown or unknowable whether the output is correct or not that is true no matter what route was taken to get there. ] (]) 04:48, 2 January 2025 (UTC) | |||
*::::::::::::If I use BioRender or GraphPad to generate a figure, I can be confident that the output does not have errors that would misrepresent the underlying data. I don't have to verify that all 18,000 data points in a scatter plot exist in the correct XYZ positions because I know the method for rendering them is published and empirically validated. Other people can also be certain that the process of getting from my input to the product is accurate and reproducible, and could in theory reconstruct my raw data from it. AI-generated figures have no prescribed method of transforming input beyond what the prompt entails; therefore I additionally have to be confident in how precise my prompt is ''and'' confident that the training corpus for this procedure is so accurate that no error-producing paths exist (not to mention absolutely certain that there is no embedded contamination from prior prompts). Other people have all those concerns, and on top of that likely don't have access to the prompt or the raw data to validate the output, nor do they necessarily know how fastidious I am about my generative AI use. At least with a hand-drawn diagram viewers can directly transfer their trust in the author's knowledge and reliability to their presumptions about the diagram's accuracy. ] (]) 05:40, 2 January 2025 (UTC) | |||
*:::::::::::::If you've got 18,000 data points, we are beyond the realm of "simple geometric diagrams". ] (]) 07:47, 2 January 2025 (UTC) | |||
*::::::::::::::The original "simple geometric diagrams" comment was referring to your 100 dots image. I don't think increasing the dots materially changes the discussion beyond increasing the laboriousness of verifying the accuracy of the image. ] (]) 07:56, 2 January 2025 (UTC) | |||
*:::::::::::::::Yes, but since "the laboriousness of verifying the accuracy of the image" is exactly what she doesn't want to undertake for 18,000 dots, then I think that's very relevant. ] (]) 07:58, 2 January 2025 (UTC) | |||
*:{{outdent|14}} And where is that cutoff supposed to be? 1000 dots? A single straight line? An atomic diagram? What is "simple" to someone unfamiliar with a topic may be more complex.{{pb}}And I don't want to count 100 dots either! ] (]) 17:43, 2 January 2025 (UTC) | |||
*::Maybe you don't. But I know for certain that you can count 10 across, 10 down, and multiply those two numbers to get 100. That's what I did when I made the image, after all. ] (]) 07:44, 3 January 2025 (UTC) | |||
* '''Comment''': when you Google search someone (at least from the Chrome browser), often the link to the Misplaced Pages article includes a thumbnail of the lead photo as a preview. Even if the photo is labelled as an AI image in the article, people looking at the thumbnail from Google would be misled (if the image is chosen for the preview). ] (]) 09:39, 1 January 2025 (UTC) | |||
*:This is why we should not use inaccurate images, regardless of how the image was created. It has absolutely nothing to do with AI. ] (]) 11:39, 1 January 2025 (UTC) | |||
* '''Already opposed a blanket ban''': It's unclear to me why we have a separate BLP subsection, as BLPs are already included in the main section above. Anyway, I ] there. ] (]) | |||
*:Some editors might oppose a blanket ban on ''all'' AI-generated images while at the same time being against using AI-generated images (created by using text prompts/]) to depict ]. ] (]) 14:32, 1 January 2025 (UTC)
*'''No''' For at least now, let's not let the problems of AI intrude into BLP articles which need to have the highest level of scrutiny to protect the person represented. Other areas on WP may benefit from AI image use, but let's keep it far out of BLP at this point. --] (]) 14:35, 1 January 2025 (UTC) | |||
*I am not a fan of “banning” AI images completely… but I agree that BLPs require special handling. I look at AI imagery as being akin to a computer generated painting. In a BLP, we allow paintings of the subject, but we ''prefer'' photos over paintings (if available). So… we should prefer photos over AI imagery. {{pb}}<!--list syntax fixed ~ToBeFree--> That said, AI imagery is getting good enough that it can be mistaken for a photo… so… If an AI generated image ''is'' the ''only'' option (ie there is no photo available), then the caption should ''clearly'' indicate that we are using an AI generated image. And that image should be replaced as soon as possible with an actual photograph. ] (]) 14:56, 1 January 2025 (UTC) | |||
*:The issue with the latter is that Misplaced Pages images get picked up by Google and other search engines, where the caption isn't there anymore to add the context that a photorealistic image was AI-generated. ] (] · ]) 15:27, 1 January 2025 (UTC) | |||
*::We're here to build an encyclopedia, not to protect commercial search engine companies. | |||
*::I think my view aligns with Blueboar's (except that I find no firm preference for photos over classical portrait paintings): We shouldn't have ''inaccurate'' AI images of people (living or dead). But the day appears to be coming when AI will generate accurate ones, or at least ones that are close enough to accurate that we can't tell the difference unless the uploader voluntarily discloses that information. Once we can no longer tell the difference, what's the point in banning them? Images need to look like the thing being depicted. When we put a photorealistic image in an article, we could be said to be implicitly claiming that the image ''looks like'' whatever's being depicted. We are not ''necessarily'' warranting that the image was created through a specific process, but the image really does need to look like the subject. ] (]) 03:12, 2 January 2025 (UTC)
*:::You are presuming that sufficient accuracy will prevent us from knowing whether someone is uploading an AI photo, but that is not the case. For instance, if someone uploads large amounts of "photos" of famous people, and can't account for how they got them (e.g. can't give a source where they scraped them from, or dates or any Exif metadata at all for when they were taken), then it will still be obvious that they are likely using AI. ] (]) 17:38, 3 January 2025 (UTC) | |||
*:::As another editor pointed out in their comment, there's the {{blue|ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet}}, especially on a site such as Misplaced Pages and especially on their own biography. ] says the bios {{tq|must be written conservatively and with regard for the subject's privacy.}} ] (]) 18:37, 3 January 2025 (UTC) | |||
*:::{{tqq| Once we can no longer tell the difference, what's the point in banning them?}} Sounds like a wolf's in sheep's clothing to me. Just because the surface appeal of fake pictures gets better, doesn't mean we should ]. '']'' (] — ]) 18:47, 3 January 2025 (UTC) | |||
*:If there are no appropriately-licensed images of a person, then by definition any AI-generated image of them will be either a copyright infringement or a complete fantasy. ] (]) 04:48, 2 January 2025 (UTC) | |||
*::Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant: If an image is a copyvio we can't use it and it is irrelevant why it is a copyvio. If an image is a "complete fantasy" then it is exactly as unusable as a complete fantasy generated by non-AI means, so again AI is irrelevant. I've had to explain this multiple times in this discussion, so read that for more detail and note the lack of refutation. ] (]) 04:52, 2 January 2025 (UTC) | |||
*:::But we can assume good faith that a human isn't blatantly copying something. We can't assume that from an LLM like Stability AI, which has been shown to copy from Getty's images. ] (]) 05:50, 2 January 2025 (UTC)
*::::Ooooh, I'm not sure that we can assume that humans aren't blatantly copying something. We can assume that they meant to be helpful, but that's not quite the same thing. ] (]) 07:48, 2 January 2025 (UTC) | |||
*<s>'''Oppose.'''</s> '''Yes.''' I echo ]: {{Tq2|What this conversation is really circling around is banning entire skillsets from contributing to Misplaced Pages merely because some of us are afraid of AI images and some others of us want to engineer a convenient, half-baked, policy-level "consensus" to point to when they delete quality images from Misplaced Pages. Every time someone generates text based on a source, they are doing some acceptable level of interpretation to extract facts or rephrase it around copyright law, and I don't think illustrations should be considered so severely differently as to justify a categorical ban. For instance, the Gisele Pelicot portrait is based on non-free photos of her. Once the illustration exists, it is trivial to compare it to non-free images to determine if it is an appropriate likeness, which it is. That's no different than judging contributed text's compliance with fact and copyright by referring to the source. It shouldn't be treated differently just because most Wikipedians contribute via text.<br/>Additionally, referring to interpretive skillsets that synthesize new information like, random example, statistical analysis. Excluding those from Misplaced Pages is current practice and not controversial. Meanwhile, I think the ability to create images is more fundamental than that. It's not (inherently) synthesizing new information. A portrait of a person (alongside the other examples in this thread) contains verifiable information. It is current practice to allow them to fill the gaps where non-free photos can't. That should continue. Honestly, it should expand.}} ] (]) 15:41, 1 January 2025 (UTC)
*:Additionally, in direct response to "these images are fake": All illustrations of a subject could be called "fake" because they are not photographs. (Which can also be faked.) The standard for the inclusion of an illustration on Misplaced Pages has never been photorealism, medium, or previous publication in a RS. The standard is how adequately it reflects the facts which it claims to depict. If there is a better image that can be imported to Misplaced Pages via fair use or a license, then an image can be easily replaced. Until such a better image has been sourced, it is absolutely bewildering to me that we would even discuss removing images of people from their articles. What a person looked like is one of the most basic things that people want to know when they look someone up on Misplaced Pages. Including an image of almost any quality (yes, even a cartoon) is practically by definition an improvement to the article and addressing an important need. We should be encouraging artists to continue filling the gaps that non-free images cannot fill, not creating policies that will inevitably expand into more general prejudices against all new illustrations on Misplaced Pages. ] (]) 15:59, 1 January 2025 (UTC) | |||
*::By "Oppose", I'm assuming your answer to the RfC question is "Yes". And this RfC is about using {{blue|AI-generated images (generated via text prompts, see also: ])}} to depict BLP subjects, not regarding human-created drawings/cartoons/sketches, etc. of BLPs. ] (]) 16:09, 1 January 2025 (UTC) | |||
*:::I've changed it to "yes" to reflect the reversed question. I think all of this is related because there is no coherent distinguishing point; AI can be used to create images in a variety of styles. These discussions have shown that a policy of banning AI images ''will'' be used against non-AI images of all kinds, so I think it's important to say these kinds of things now. ] (]) 16:29, 1 January 2025 (UTC) | |||
*::Photorealistic images scraped from who knows where from who knows what sources are without question simply fake photographs and also clear ] and outright ]. There's no two ways about it. Articles do ''not'' require images: An article with some Frankenstein-ed image scraped from who knows what, where, and when that you "created" from a prompt is not an improvement over having no image at all. If we can't provide a quality image (like something you didn't cook up from a prompt) then people can find quality, non-fake images elsewhere. ] (]) 23:39, 1 January 2025 (UTC)
*:::I really encourage you to read the discussion I linked before because it is ]. Images like these do not inherently include either OR or SYNTH, and the arguments that they do cannot be distinguished from any other user-generated image content. But, briefly, I never said articles required images, and this is not about what articles ''require''. It is about ''improvements'' to the articles. Including a relevant picture where none exists is almost always an improvement, especially for subjects like people. Your disdain for the method the person used to make an image is irrelevant to whether the content of the image is actually verifiable, and the only thing we ought to care about is the content. ] (]) 03:21, 2 January 2025 (UTC) | |||
*::::Images like these are absolutely nothing more than synthesis in the purest sense of the word and are clearly a violation of ]: Again, you have no idea what data was used to generate these images and you're going to have a very hard time convincing anyone to describe them as anything other than outright fakes.
*::::A reminder that WP:SYNTH shuts down attempts at manipulation of images ("It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Misplaced Pages:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.") and generating a photorealistic image (from who knows what!) is far beyond that. | |||
*::::Fake images of people do not improve our articles in any way and only erode reader trust. What's next, an argument for the ''fake sources'' LLMs also love to "hallucinate"? ] (]) 03:37, 2 January 2025 (UTC) | |||
*:::::So, if you review the first sentence of SYNTH, you'll see it has no special relevance to this discussion: {{Tq|Do not combine material from multiple sources to state or imply a conclusion not explicitly stated by any of the sources.}}. My primary example has been a picture of a person; what a person looks like is verifiable by comparing the image to non-free images that cannot be used on Misplaced Pages. If the image resembles the person, it is not SYNTH. An illustration of a person created and intended to look like that person is not a manipulation. The training data used to make the AI is irrelevant to whether the image in fact resembles the person. You should also review ] because SYNTH is not a policy; NOR is the policy: {{tq|If a putative SYNTH doesn't constitute original research, then it doesn't constitute SYNTH.}} Additionally, ]. A categorical rule against AI cannot be justified by SYNTH because it does not categorically apply to all use cases of AI. To do so would be illogical on top of ill-advised. ] (]) 08:08, 2 January 2025 (UTC) | |||
*::::::"training data used to make the AI is irrelevant" — spoken like a true AI evangelist! Sorry, 'good enough' photorealism is still just synthetic slop, a fake image presented as real of a ''human being''. A fake image of someone generated from who-knows-what that 'resembles' an article's subject is about as ] as it gets. Yikes. As for the attempts to pass of prompt-generated photorealistic fakes of people as somehow the same as someone's illustration, you're completely wasting your time. ] (]) 09:44, 2 January 2025 (UTC) | |||
*:::::::NOR is a content policy and SYNTH is content guidance within NOR. Because you have admitted that this is not ''about the content'' for you, NOR and SYNTH are irrelevant to your argument, which boils down to ] and, now, inaccurate personal attacks. Continuing this discussion between us would be pointless. ] (]) 09:52, 2 January 2025 (UTC) | |||
*::::::::This is in fact entirely about content (why the hell else would I bother?) but it is true that I also dismissed your pro-AI 'it's just like a human drawing a picture!' as outright nonsense a while back. Good luck convincing anyone else with that line - it didn't work here. ] (]) 09:59, 2 January 2025 (UTC) | |||
*'''Maybe''': there is an implicit assumption with this RFC that an AI generated image would be photorealistic. There hasn't been any discussion of an AI generated sketch. If you asked an AI to generate a sketch (that clearly looked like a sketch, similar to the Gisèle Pelicot example) then I would potentially be ok with it. ] (]) 18:14, 1 January 2025 (UTC) | |||
*:That's an interesting thought to consider. At the same time, I worry about (well-intentioned) editors inundating image-less BLP articles with AI-generated images in the style of cartoons/sketches (if only photorealistic ones are prohibited) etc. At least requiring a human to draw/paint/whatever creates a barrier to entry; these AI-generated images can be created in under a minute using these text-to-image models. Editors are already wary about human-created cartoon portraits (]), now they'll be tasked with dealing with AI-generated ones in BLP articles. ] (]) 20:28, 1 January 2025 (UTC) | |||
*::It sounds like your problem is not with AI but with cartoon/sketch images in BLP articles, so AI is once again completely irrelevant. ] (]) 22:14, 1 January 2025 (UTC) | |||
*::That is a good concern you brought up. There is a possibility of the spamming of low quality AI-generated images which would be laborious to discuss on a case-by-case basis but easy to generate. At the same time though that is a possibility, but not yet an actuality, and ] states that new policies should address current problems rather than hypothetical concerns. ] (]) 22:16, 1 January 2025 (UTC) | |||
*Easy '''no''' for me. I am not against the use of AI images wholesale, but I do think that using AI to represent an existent thing such as a person or a place is too far. Even a tag wouldn't be enough for me. ] ''']] 19:05, 1 January 2025 (UTC)
*'''No''' obviously, per previous discussions about cartoonish drawn images in BLPs. Same issue here as there, it is essentially original research and misrepresentation of a living person's likeness. ] (]) 22:19, 1 January 2025 (UTC) | |||
*'''No''' to photorealistic, no to cartoonish... this is not a hard choice. The idea that "this has nothing to do with AI" when "AI" magnifies the problem to stupendous proportions is just not tenable. ] (]) 23:36, 1 January 2025 (UTC) | |||
*:While AI might "amplify" the thing you dislike, that does not make AI the problem. The problem is whatever underlying thing is being amplified. ] (]) 01:16, 2 January 2025 (UTC) | |||
*::The thing that amplifies the problem is necessarily a problem. ] (]) 02:57, 2 January 2025 (UTC) | |||
*:::That is arguable, but banning the amplifier does not do anything to solve the problem. In this case, banning the amplifier would cause multiple other problems that nobody supporting this proposal has even attempted to address, let alone mitigate. ] (]) 03:04, 2 January 2025 (UTC)
*'''No''' for all people, per Chaotic Enby. ] (]) 03:23, 2 January 2025 (UTC) Add: no to any AI-generated images, whether photorealistic or not. ] (]) 04:00, 3 January 2025 (UTC) | |||
*'''No''' - We should not be hosting faked images (except as notable fakes). We should also not be hosting copyvios ({{tq|"Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant"}} is just totally wrong - we should be steering clear of copyvios, and if the issue is unsettled then we shouldn't use them until it is). | |||
*If people upload faked images to WP or Commons the response should be as it is now. The fact that fakes are becoming harder to detect simply from looking at them hardly affects this - we simply confirm when the picture was supposed to have been taken and examine the plausibility of it from there. ] (]) 14:39, 2 January 2025 (UTC)
*:{{tpq|we should be steering clear of copyvio}} we do - if an image is a copyright violation it gets deleted, regardless of why it is a copyright violation. What we do not do is ban using images that are not copyright violations because they are copyright violations. Currently the WMF lawyers and all the people on Commons who know more about copyright than I do say that at least some AI images are legally acceptable for us to host and use. If you want to argue that, then go ahead, but it is not relevant to ''this'' discussion. | |||
*:{{tpq|if people upload faked images the response should be as it is now}} in other words you are saying that the problem is faked images not AI, and that current policies are entirely adequate to deal with the problem of faked images. So we don't need any specific rules for AI images - especially given that not all AI images are fakes. ] (]) 15:14, 2 January 2025 (UTC) | |||
*::The idea that {{tq|current policies are entirely adequate}} is like saying that a lab shouldn't have specific rules about wearing eye protection when it already has a poster hanging on the wall that says "don't hurt yourself". ] (]) 18:36, 2 January 2025 (UTC) | |||
*:::I rely on one of those up in my workshop at home. I figure if that doesn't keep me safe, nothing will. ] (]) 18:41, 2 January 2025 (UTC) | |||
*::::"{{tq|in other words you are saying that the problem is faked images not AI}}" - AI generated images *are* fakes. This is merely confirming that for the avoidance of doubt. | |||
*::::"{{tq|at least some AI images are legally acceptable for us}}" - Until they decide which ones that isn't much help. ] (]) 19:05, 2 January 2025 (UTC) | |||
*:::::Yes – what FOARP said. AI-generated images are fakes and are misleading. '']'' (] — ]) 19:15, 2 January 2025 (UTC) | |||
*:::Those specific rules exist because generic warnings have proven not to be sufficient. Nobody has presented any evidence that the current policies are not sufficient, indeed quite the contrary. ] (]) 19:05, 2 January 2025 (UTC) | |||
*'''No!''' This would be a massive can of worms; perhaps, however, we wish to cause problems in the new year. ] <small>(]) | :) | he/him | </small> 15:00, 2 January 2025 (UTC) | |||
*:Noting that I think that no AI-generated images are acceptable in BLP articles, regardless of whether they are photorealistic or not. ] <small>(]) | :) | he/him | </small> 15:40, 3 January 2025 (UTC) | |||
*'''No''', unless the AI image has encyclopedic significance beyond "depicts a notable person". AI images, if created by editors for the purpose of inclusion in Misplaced Pages, convey little reliable information about the person they depict, and the ways in which the model works are opaque enough to most people as to raise verifiability concerns. ] (] • ]) 15:25, 2 January 2025 (UTC) | |||
*:To clarify, do you object to uses of an AI image in a BLP when the subject uses that image for self-identification? I presume that AI images that have been the subject of notable discussion are an example of "significance beyond depict a notable person"? ] (]) 15:54, 2 January 2025 (UTC) | |||
*::If the subject uses the image for self-identification, I'd be fine with it - I think that'd be analogous to situations such as "cartoonist represented by a stylized self-portrait", which definitely has some precedent in articles like ]. I agree with your second sentence as well; if there's notable discussion around a particular AI image, I think it would be reasonable to include that image on Misplaced Pages. ] (] • ]) 19:13, 2 January 2025 (UTC) | |||
* '''No''', with obvious exceptions, including if the subject themself uses the image as their representation, or if the image is notable itself. Not including the lack of a free alternative: if there is no free alternative... where did the AI find data to build an image... non-free too. Not including images generated by WP editors (that's kind of ]...). - ] (]) 18:02, 2 January 2025 (UTC)
*'''Maybe''' I think the question is unfair as it is illustrated with what appears to be a photo of the subject but isn't. People are then getting upset that they've been misled. As others note, there are copyright concerns with AI reproducing copyrighted works that in turn make an image that is potentially legally unusable. But that is more a matter for Commons than for Misplaced Pages. As many have noted, a sketch or painting never claims to be an accurate depiction of a person, and I don't care if that sketch or painting was done by hand or an AI prompt. I strongly ask ] to abort the RFC. You've asked people to give a yes/no vote to what is a more complex issue. A further problem with the example used is the unfortunate prejudice on Misplaced Pages against user-generated content. While the text-generated AI of today is crude and random, there will come a point where many professionally published photos illustrating subjects, including people, are AI generated. Even today, your smartphone can create a groupshot where everyone is smiling and looking at the camera. It was "trained" on the 50 images it quickly took and responded to the built-in "text prompt" of "create a montage of these photos such that everyone is smiling and looking at the camera". This vote is a knee-jerk reaction to content that is best addressed by some other measure (such as that it is a misleading image). And a good example of asking people to vote way too early, when the issues haven't been thought through -- ]°] 18:17, 2 January 2025 (UTC)
*'''No''' This would very likely set a dangerous precedent. The only exception I think should be if the image itself is notable. If we move forward with AI images, especially for BLPs, it would only open up a whole slew of regulations and RfCs to keep them in check. Better no image than some digital multiverse version of someone that is "basically" them but not really. Not to mention the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet. ] (]) 18:31, 2 January 2025 (UTC) | |||
*'''No'''. LLMs don't generate answers, they generate ''things that look like'' answers, but aren't; a lot of the time, that's good enough, but sometimes it very much isn't. It's the same issue for text-to-image models: they don't generate photos of people, they generate ''things that look like'' photos. Using them on BLPs is unacceptable. ] (]) 19:30, 2 January 2025 (UTC) | |||
*'''No'''. I would be pissed if the top picture of me on Google was AI-generated. I just don't think it's moral for living people. The exceptions given above by others are okay, such as if the subject uses the picture themselves or if the picture is notable (with context given). ] (]) 19:56, 2 January 2025 (UTC) | |||
*'''No.''' Uploading alone, although mostly a Commons issue, would already be a problem to me and may have personality rights issues. Illustrating an article with a fake photo <ins>(or drawing)</ins> of a living person, even if it is labeled as such, would not be acceptable. For example, it could end up being shown by search engines or when hovering over a Misplaced Pages link, without the disclaimer. ] (]) 23:54, 2 January 2025 (UTC)
* I was going to say no... but we allow paintings as portraits in BLPs. What's so different between an AI generated image, and a painting? Arguments above say the depiction may not be accurate, but the same is true of some paintings, right? (and conversely, not true of other paintings) ] (]) 00:48, 3 January 2025 (UTC) | |||
*::A painting is clearly a painting; as such, the viewer knows that it is not an accurate representation of a particular reality. An AI-generated image made to look exactly like a photo, looks like a photo but is not. | |||
*:] (]) 02:44, 3 January 2025 (UTC) | |||
*::Not all paintings are clearly paintings. Not all AI-generated images are made to look like photographs. Not all AI-generated images made to look like photos do actually look like photos. This proposal makes no distinction. ] (]) 02:55, 3 January 2025 (UTC) | |||
*:::Not to mention, hyper-realism is a style an artist may use in virtually any medium. If Misplaced Pages would accept an analog substitute like a painting, there's no reason Misplaced Pages shouldn't accept an equivalent painting made with digital tools, and there's no reason Misplaced Pages shouldn't accept an equivalent painting made with AI. That is, one where any obvious defects have been edited out and what remains is a straightforward picture of the subject. ] (]) 03:45, 3 January 2025 (UTC) | |||
*::::For the record (and for any media watching), while I personally find it fascinating that a few editors here are spending a substantial amount of time (in the face of an overwhelming 'absolutely not' consensus no less) attempting to convince others that computer-generated (that is, ''faked'') photos of human article subjects are somehow ''a good thing'', I also find it interesting that these editors seem to express absolutely no concern for the intensely negative reaction they're already seeing from their fellow editors and seem totally unconcerned about the inevitable trust drop we'd experience from Misplaced Pages readers when they would encounter fake photos on our BLP articles especially. ] (]) 03:54, 3 January 2025 (UTC) | |||
*:::::Misplaced Pages's reputation would not be affected positively or negatively by expanding the current-albeit-sparse use of illustrations to depict subjects that do not have available pictures. In all my writing about this over the last few days, you are the only one who has said anything negative about me as a person or, really, my arguments themselves. As loath as I am to cite it, ] means assuming that people you disagree with are not ''trying to hurt Misplaced Pages.'' Thryduulf, I, and others have explained in detail why we think our ultimate ideas are explicit benefits to Misplaced Pages and why our opposition to these immediate proposals comes from a desire to prevent harm to Misplaced Pages. I suggest taking a break to reflect on that, matey. ] (]) 04:09, 3 January 2025 (UTC) | |||
*::::::Look, I don't know if you've been living under a rock or what for the past few years but the reality is that '' people hate AI images'' and dumping a ton of AI/fake images on Misplaced Pages, a place people go for ''real information'' and often ''trust'', inevitably leads to a huge trust issue, something Misplaced Pages is increasingly suffering from already. This is ''especially'' a problem when they're intended to represent ''living people'' (!). I'll leave it to you to dig up the bazillion controversies that have arisen and continue to arise since companies worldwide have discovered that they can now replace human artists with 'AI art' produced by "prompt engineers" but you can't possibly expect us to ignore that reality when discussing these matters. ] (]) 04:55, 3 January 2025 (UTC) | |||
*:::::::Those trust issues are born from the publication of hallucinated information. I have only said that it should be OK to use an image on Misplaced Pages when it contains only verifiable information, which is the same standard we apply to text. That standard is and ought to be applied independently of the way the initial version of an image was created. ] (]) 06:10, 3 January 2025 (UTC) | |||
*:To my eye, the distinction between AI images and paintings here is less a question of medium and more of verifiability: the paintings we use (or at least the ones I can remember) are significant paintings that have been acknowledged in sources as being reasonable representations of a given person. By contrast, a purpose-generated AI image would be more akin to me painting a portrait of somebody here and now and trying to stick that on their article. The image could be a faithful representation <small>(unlikely, given my lack of painting skills, but let's not get lost in the metaphor)</small>, but if my painting hasn't been discussed anywhere besides Misplaced Pages, then it's potentially OR or UNDUE to enshrine it in mainspace as an encyclopedic image. ] (] • ]) 05:57, 3 January 2025 (UTC) | |||
*::An image contains a collection of facts, and those facts need to be verifiable just like any other information posted on Misplaced Pages. An image that verifiably resembles a subject as it is depicted in reliable sources is categorically ''not OR''. Discussion in other sources is not universally relevant; we don't restrict ourselves to only previously-published images. If we did that, Misplaced Pages would have very few images. ] (]) 06:18, 3 January 2025 (UTC) | |||
*:::Verifiable how? Only by the editor themselves comparing to a real photo (which was probably used by the LLM to create the image…). | |||
*:::These things are fakes. The analysis stops there. ] (]) 10:48, 4 January 2025 (UTC) | |||
*::::::Verifiable by comparing them to a reliable source. Exactly the same as what we do with text. There is no coherent reason to treat user-generated images differently than user-generated text, and the universalist tenor of this discussion has damaging implications for all user-generated images regardless of whether they were created with AI. Honestly, I rarely make arguments like this one, but I think it could show some intuition from another perspective: Imagine it's 2002 and Misplaced Pages is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures. The text editors say the artists cannot contribute ANYTHING to Misplaced Pages because their images that have not been previously published are not verifiable. That is a double-standard that privileges the contributions of text-editors simply because most users are text-editors and they are used to verifying text; that is not a principled reason to treat text and images differently. Moreover, that is simply not what happened—The opposite happened, and images are treated as verifiable based on their contents just like text because that's a common sense reading of the rule. It would have been madness if images had been treated differently. And yet that is essentially the fundamentalist position of people who are extending their opposition to AI with arguments that apply to all images. If they are arguing verifiability seriously at all, they are pretending that the sort of degenerate situation I just described already exists when the opposite consensus has been reached consistently ''for years''.
In ], they even tried to say Wikipedians had "turned a blind eye" to these image issues as if negatively characterizing those decisions would invalidate the fact that ''those decisions were consensus.'' The motivated reasoning of these discussions has been as blatant as that.<br/>At the bottom of this dispute, I take issue with trying to alter the rules in a way that creates a new double-standard within verifiability that applies to all images but not text. That's especially upsetting when (despite my and others' best efforts) so many of us are still focusing ''SOLELY'' on their hatred for AI rather than considering the obvious second-order consequences for user-generated images as a whole.<br/>Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake." The issue has always been ''verifiability'', not provenance or falsity. Sometimes, IMO, that has led to disaster and Misplaced Pages saying things I know to be factually untrue despite the contents of reliable sources. But ''that'' is the policy. We compare the contents of Misplaced Pages to reliable sources, and the contents of Misplaced Pages are considered verifiable if they cohere.<br/>I ask again: If Misplaced Pages's response to the creation of AI imaging tools is to crack down on all artistic contributions to Misplaced Pages (which seems to be the inevitable direction of these discussions), what does that say? If our negative response to AI tools is to ''limit what humans can do on Misplaced Pages'', what does that say? Are we taking a stand for human achievements, or is this a very heated discussion of cutting off our nose to save our face? ] (]) 23:31, 4 January 2025 (UTC)
*:::::{{tq|"Verifiable by comparing them to a reliable source"}} - comparing two images and saying that one ''looks like'' the other is not "verifying" anything. The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing. | |||
*:::::{{tq|"Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake.""}} - Try presenting a paraphrasing as a quotation and see what happens. | |||
*:::::{{tq|"Imagine it's 2002 and Misplaced Pages is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures..."}} - This basically happened, and is the origin of ]. Misplaced Pages is not a host for original works. ] (]) 22:01, 6 January 2025 (UTC) | |||
*::::::{{tq|Comparing two images and saying that one looks like the other is not "verifying" anything.}} Comparing text to text in a reliable source is literally the same thing. | |||
*::::::{{tq|The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing.}} No it isn't. The text equivalent is writing a sentence in an article and putting a ref tag on it. Perhaps there is room for improving the referencing of images in the sense that they should offer example comparisons to make. But an image created by a person is not unverifiable simply because it is user-generated. It is not somehow ''more'' unverifiable simply because it is created in a lifelike style. | |||
*::::::{{tq|Try presenting a paraphrasing as a quotation and see what happens.}} Besides what I just said, ''nobody'' is even presenting these images as equatable to quotations. People in this thread have simply been calling them "fake" of their own initiative; the uploaders have not asserted that these are literal photographs to my knowledge. The uploaders of illustrations obviously did not make that claim either. (And, if the contents of the image is a copyvio, that is a separate issue entirely.) | |||
*::::::{{tq|This basically happened, and is the origin of WP:NOTGALLERY.}} That is not the same thing. User-generated images that illustrate the subject are not prohibited by ]. Misplaced Pages is a host of encyclopedic content, and user-generated images can have encyclopedic content. ] (]) 02:41, 7 January 2025 (UTC) | |||
*:::::::Images are way more complex than text. Trying to compare them in the same way is a very dangerous simplification. '']'' (] — ]) 02:44, 7 January 2025 (UTC) | |||
*::::::::Assume only non-free images exist of a person. An illustrator refers to those non-free images and produces a painting. From that painting, you see a person who looks like the person in the non-free photographs. The image is verified as resembling the person. That is a simplification, but to call it "dangerous" is disingenuous at best. The process for challenging the image is clear. Someone who wants to challenge the veracity of the image would just need to point to details that do not align. For instance, "he does not typically have blue hair" or "he does not have a scar." That is what we already do, and it does not come up much because it would be weird to deliberately draw an image that looks nothing like the person. Additionally, someone who does not like the image for aesthetic reasons rather than encyclopedic ones always has the option of sourcing a photograph some other way like permission, fair use, or taking a new one themself. This is not an intractable problem. ] (]) 02:57, 7 January 2025 (UTC) | |||
*:::::::::So a photorealistic AI-generated image would be considered acceptable until someone identifies a "big enough" difference? How is that anything close to ethical? A portrait that's got an extra mole or slightly wider nose bridge or lacks a scar is still ''not an image of the person'' regardless of whether random Misplaced Pages editors notice. And while I don't think user-generated non-photorealistic images should ever be used on biographies either, at least those can be traced back to a human who is ultimately responsible for the depiction, who can point to the particular non-free images they used as references, and isn't liable to average out details across all time periods of the subject. And that's not even taking into account the copyright issues. ] (]) 22:52, 7 January 2025 (UTC)
*::::::::::{{+1}} to what JoelleJay said. The problem is that AI-generated images are simulations trying to match existing images, sometimes, yes, with an impressive degree of accuracy. But they will always be inferior to a human-drawn painting that's ''trying to depict the person''. We're a human encyclopedia, and we're built by humans doing human things and sometimes with human errors. '']'' (] — ]) 23:18, 7 January 2025 (UTC) | |||
*::::::::::You can't just raise this to an "ethical" issue by saying the word "ethical." You also can't just invoke copyright without articulating an actual copyright issue; we are not discussing copyvio. Everyone agrees that a photo with an actual copyvio in it is subject to that policy. | |||
*::::::::::But to address your actual point: Any image—any ''photo''—beneath the resolution necessary to depict the mole would be missing the mole. Even with photography, we are never talking about science-fiction images that perfectly depict every facet of a person in an objective sense. We are talking about equipment that creates an approximation of reality. The same is true of illustrations and AI imagery. | |||
*::::::::::Finally, a human being ''is'' responsible for the contents of the image because a human is selecting it and is responsible for correcting any errors. The result is an image that someone is choosing to use because they believe it is an appropriate likeness. We should acknowledge that human decision and evaluate it naturally—''Is it an appropriate likeness?'' ] (]) 10:20, 8 January 2025 (UTC) | |||
*:::::::::::(Second comment because I'm on my phone.) I realize I should also respond to this in terms of additive information. What people look like is not static in the way your comment implies. Is it inappropriate to use a photo because they had a zit on the day it was taken? Not necessarily. Is an image inappropriate because it is taken at a bad angle that makes them look fat? Judging by the prolific ComicCon photographs (where people seem to make a game of choosing the worst-looking options; seriously, it's really bad), not necessarily. Scars and bruises exist and then often heal over time. The standard for whether an image with "extra" details is acceptable would still be based on whether it comports acceptably with other images; we literally do what you have capriciously described as "unethical" and supplement it with our compassionate desire to not deliberately embarrass BLPs. (The ComicCon images aside, I guess.) So, no, I would not be a fan of using images that add prominent scars where the subject is not generally known to have one, but that is just an unverifiable fact that does not belong in a Misplaced Pages image. Simple as. ] (]) 10:32, 8 January 2025 (UTC) | |||
*:::::We don't evaluate the reliability of a source solely by comparing it to other sources. For example, there is an ongoing discussion at the baseball WikiProject talk page about the reliability of a certain web site. It lists no authors nor any information on its editorial control policy, so we're not able to evaluate its reliability. The reliability of all content being used as a source, including images, needs to be considered in terms of its provenance. ] (]) 23:11, 7 January 2025 (UTC) | |||
* Can you note in your !vote whether AI-generated images (generated via text prompts/text-to-image models) that are ''not'' photo-realistic / hyper-realistic in style are okay to use to depict BLP subjects? For example, see the image to the right, which was ] then ] from his article: ] by ]]] {{pb}} Pinging people who !voted No above: ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ] --- ] (]) 03:55, 3 January 2025 (UTC) {{clear}} | |||
*:Still no, I thought I was clear on that but we should not be using AI-generated images in articles for anything besides representing the concept of AI-generated images, or if an AI-generated image is notable or irreplaceable in its own right -- e.g., a musician uses AI to make an album cover.
*:(this isn't even a good example, it looks more like ]) | |||
*:] (]) 04:07, 3 January 2025 (UTC) | |||
*:Was I unclear? ''No'' to all of them. ] (]) 04:13, 3 January 2025 (UTC) | |||
*:Still '''no''', because carving out that type of exception will just lead to arguments down the line about whether a given image is too realistic. <span class="nowrap">—] (] | ])</span> 04:24, 3 January 2025 (UTC) | |||
*:I still think '''no'''. My opposition isn't just to the fact that AI images are misinformation, but also that they essentially serve as a loophole for getting around Enwiki's image use policy. To know what somebody looks like, an AI generator needs to have images of that person in its dataset, and it draws on those images to generate a derivative work. If we have no free images of somebody and we use AI to make one, that's just using a fair use copyrighted image but removed by one step. The image use policy prohibits us from using fair use images for BLPs so I don't think we should entertain this loophole. If we ''do'' end up allowing AI images in BLPs, that just disqualifies the rationale of not allowing fair use in the first place. ] (]) 04:40, 3 January 2025 (UTC) | |||
*:'''No''' those are not okay, as this will just cause arguments from people saying a picture is ''obviously'' AI-generated, and that it is therefore appropriate. As I mentioned above, there are some exceptions to this, which Gnomingstuff perfectly describes. Fake sketches/cartoons are not appropriate and provide little encyclopedic value. ] (]) 05:27, 3 January 2025 (UTC)
*:'''No''' to this as well, with the same carveout for individual images that have received notable discussion. Non-photorealistic AI images are going to be no more verifiable than photorealistic ones, and on top of that will often be lower-quality as images. ] (] • ]) 05:44, 3 January 2025 (UTC) | |||
*:Thanks for the ping, yes I can, the answer is no. ] (]) 07:31, 3 January 2025 (UTC) | |||
*:'''No''', and that image should be deleted before anyone places it into a mainspace article. Changing the RfC intro long after its inception seems a second bite at an apple that's not aged well. ] (]) 09:28, 3 January 2025 (UTC) | |||
*::The RfC question has not been changed; another editor noted that the RfC question did not make a distinction between photorealistic/non-photorealistic AI-generated images, so I had to add a note and ping the editors who'd !voted No to clarify things. It has only been 3 days; there's still 27 more days to go. ] (]) 11:18, 3 January 2025 (UTC)
*:::Also answering '''No''' to this one per all the arguments above. "It has only been 3 days" is not a good reason to change the RfC question, especially since many people have already !voted and the "30 days" is mostly indicative rather than an actual deadline for a RfC. ] (] · ]) 14:52, 3 January 2025 (UTC) | |||
*::::The RfC question hasn't been changed; see my response to Zaathras below. ] (]) 15:42, 3 January 2025 (UTC) | |||
*:No, that's an even worse approach.<span id="Masem:1735910695864:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> — ] (]) 13:24, 3 January 2025 (UTC)</span>
*:'''No'''. We're the human encyclopedia. We should have images drawn or taken by real humans who are trying to depict the ''subject'', not by machines trying to simulate an image. Besides, the given example is horribly drawn. '']'' (] — ]) 15:03, 3 January 2025 (UTC) | |||
*:I like these even less than the photorealistic ones... This falls into the same basket for me: if we wouldn't let a random editor who drew this at home using conventional tools add it to the article, why would we let a random editor who drew this at home using AI tools add it to the article? (and just to be clear, the AI-generated image of Germán Larrea Mota-Velasco is not recognizable as such) ] (]) 16:06, 3 January 2025 (UTC)
*:I said *NO*. ] (]) 10:37, 4 January 2025 (UTC) | |||
*:'''No''' Having such images, as said above, means the AI had to use copyrighted pictures to create them, and we shouldn't use them. --] (]) 01:12, 5 January 2025 (UTC)
*:Still '''no'''. If for no other reason than that it's a bad precedent. As others have said, if we make one exception, it will just lead to arguments in the future about whether something is "realistic" or not. I also don't see why we would need cartoon/illustrated-looking AI pictures of people in BLPs. ] (]) 20:43, 6 January 2025 (UTC) | |||
*'''Absolutely not'''. These images are based on whatever the AI could find on the internet, with little to no regard for copyright. Misplaced Pages is better than this. ] (]) 10:16, 3 January 2025 (UTC) | |||
*'''Comment''' The RfC question should not have been fiddled with, esp. for such a minor argument that the complainant could have simply included in their own vote. I have no need to re-confirm my own entry. ] (]) 14:33, 3 January 2025 (UTC)
*:The RfC question hasn't been modified; I've only added a note clarifying that these images can either be photorealistic in style or non-photorealistic in style. I pinged all the !No voters to make them aware. I could remove the Note if people prefer that I do (but the original RfC question is the ''exact same'' as it is now, so I don't think the addition of the Note makes a whole ton of difference). ] (]) 15:29, 3 January 2025 (UTC)
*'''No''' At this point it feels redundant, but I'll just add to the horde of responses in the negative. I don't think we can fully appreciate the issues that this would cause. The potential problems and headaches far outweigh whatever little benefit might come from AI images for BLPs. ]] 21:34, 3 January 2025 (UTC) | |||
*'''Support temporary blanket ban''' with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. ] (]) 23:01, 3 January 2025 (UTC)
*'''No'''. Misplaced Pages is made ''by'' and ''for'' humans. I don't want to become . Adding an AI-generated image to a page whose topic isn't about generative AI makes me feel insulted. ] (]) 00:03, 4 January 2025 (UTC) | |||
*'''No'''. Generative AI may have its place, and it may even have a place on Misplaced Pages in some form, but that place isn't in BLPs. There's no reason to use images of someone that do not exist over a real picture, or even something like a sketch, drawing, or painting. Even in the absence of pictures or human-drawn/painted images, I don't support using AI-generated images; they're not really pictures of the person, after all, so I can't support using them on articles of people. Using nothing would genuinely be a better choice than generated images. ] <span style="font-weight:bold">|</span> ] 01:07, 4 January 2025 (UTC) | |||
*'''No''' due to reasons of copyright (AI harvests copyrighted material) and verifiability. ] <small>(])</small> 18:12, 4 January 2025 (UTC) | |||
*'''No.''' Even if you are willing to ignore the inherently fraught nature of using AI-generated ''anything'' in relation to BLP subjects, there is simply little to no benefit that could possibly come from trying something like this. There's no guarantee the images will actually look like the person in question, and therefore there's no actual context or information that the image is providing the reader. What a baffling proposal. ] (]) 19:53, 4 January 2025 (UTC) | |||
*:{{tpq|There's no guarantee the images will actually look like the person in question}} there is no guarantee ''any'' image will look like the person in question. When an image is not a good likeness, regardless of why, we don't use it. When an image is a good likeness we consider using it. Whether an image is AI-generated or not is completely independent of whether it is a good likeness. There are also reasons other than identification why images are used on BLP articles. ] (]) 20:39, 4 January 2025 (UTC)
*Foreseeably there may come a time when people's official portraits are AI-enhanced. That time might not be very far in the future. Do we want an exception for official portraits?—] <small>]/]</small> 01:17, 5 January 2025 (UTC) | |||
*:This subsection is about purely AI-generated works, not about AI-enhanced ones. ] (] · ]) 01:23, 5 January 2025 (UTC) | |||
*'''No.''' Per Cremastra, "We should have images drawn or taken by real humans who are trying to depict the ''subject''," - ] (]) 02:12, 5 January 2025 (UTC) | |||
*'''Yes''', depending on the specific case. One can use drawings by artists, even such as ]. The latter is an intentional distortion, one could say an intentional misinformation. Still, such images are legitimate on many pages. Or consider numerous images of ]. How reliable are they? I am not saying we must deliberately use AI images on all pages, but they may be fine in some cases. Now, speaking on "medical articles"... One might actually use the AI-generated images of certain biological objects like proteins or organelles. Of course a qualified editorial judgement is always needed to decide if they would improve a specific page (frequently they would not), but making a blanket ban would be unacceptable, in my opinion. For example, the images of protein models generated by ] would be fine. The AI-generated images of biological membranes I saw? I would say no. It depends. ] (]) 02:50, 5 January 2025 (UTC) {{pb
}}This is complicated of course. For example, there are tools that make an image of a person that (mis)represents him as someone much better and cleverer than he really is in life. That should be forbidden as an advertisement. This is a whole new world, but I do not think that a blanket rejection would be appropriate. ] (]) 03:19, 5 January 2025 (UTC)
* '''No''', I think there's legal and ethical issues here, especially with the current state of AI. ] ] 03:38, 5 January 2025 (UTC) | |||
*'''No''': Obviously, we shouldn't be using AI images to represent anyone. ] (]) 05:31, 5 January 2025 (UTC) | |||
*'''No''' Too risky for BLPs. Besides, if people want AI-generated content over editor-made content, we should make it clear they are in the wrong place, and readers should be given no doubt as to our integrity, sincerity and effort to give them our best, not a program's. ] (]) 14:51, 5 January 2025 (UTC)
*'''No''', as AI's grasp on the Internet takes hold stronger and stronger, it's important Misplaced Pages, as the online encyclopedia it sets out to be, remains factual and real. Using AI images on Wiki would likely do more harm than good, further thinning the boundaries between what's real and what's not. – ''']''' <sub>(]) (])</sub> 16:52, 5 January 2025 (UTC) | |||
*'''No''', not at the moment. I think it will be hard to avoid portraits that have been enhanced by AI, as it has already been ongoing for a number of years and there is no way to avoid it, but I don't want arbitrarily generated AI portraits of any type. '''<span style="text-shadow:7px 7px 8px black; font-family:Papyrus">]<sup>]</sup></span>''' 20:19, 5 January 2025 (UTC)
*'''No for natural images (e.g. photos of people)'''. Generative AI by itself is not a reliable source for facts. In principle, generating images of people and directly sticking them in articles is no different than generating text and directly sticking it in articles. In practice, however, generating images is worse: Text can at least be discussed, edited, and improved afterwards. In contrast, we have significantly less policy and fewer rigorous methods of discussing how AI-generated images of natural objects should be improved (e.g. "make his face slightly more oblong, it's not close enough yet"). Discussion will devolve into hunches and gut feelings about the fidelity of images, all of which essentially fall under WP:OR. ] (]) 20:37, 5 January 2025 (UTC) | |||
*'''No''' I'm appalled that even a small minority of editors would support such an idea. We have enough credibility issues already; using AI-generated images to represent real people is not something that a real encyclopedia should even consider. ] (]) 22:26, 5 January 2025 (UTC) | |||
*'''No''' I understand the comparison to using illustrations in BLP articles, but I've always viewed that as less preferable to no picture in all honesty. Images of a person are typically presented in context, such as a performer on stage, or a politician's official portrait, and I feel like there would be too many edge cases to consider in terms of making it clear that the photo is AI-generated and isn't representative of anything that the person specifically did, but is rather an approximation. ] (]) 06:50, 6 January 2025 (UTC)
*'''No''' - Too often the images resemble caricatures. Real caricatures may be included in articles if the caricature (e.g., political cartoon) had ] and is attributed to the artist. Otherwise, representations of living persons should be real representations taken with photographic equipment. ] (]) 02:31, 7 January 2025 (UTC) | |||
*:So you will be arguing for the removal of the lead images at ], ], etc. then? ] (]) 06:10, 7 January 2025 (UTC) | |||
*::At this point you're making bad-faith "BY YOUR LOGIC" arguments. You're better than that. Don't do it. ] (]) 19:18, 7 January 2025 (UTC) | |||
*'''Strong no''' per bloodofox. —] (]'''-''']) 03:32, 7 January 2025 (UTC) | |||
:'''No''' for AI-generated BLP images ] (]) 21:40, 7 January 2025 (UTC) | |||
Look, could someone please start up a wiki where this stuff ''can'' be kept? Because I'm sick of this mania for removing anything that might actually be useful. -] 22:43, 2 August 2007 (UTC) | |||
*'''No''' - Not only is this effectively guesswork that usually includes unnatural artefacts, but worse, it is also based on unattributed work of photographers who didn't release their work into public domain. I don't care if it is an open legal loophole somewhere, IMO even doing away with the fair use restriction on BLPs would be morally less wrong. I suspect people on whose work LLMs in question were trained would also take less offense to that option. <span style="font-family:Garamond,Palatino,serif;font-size:115%;background:-webkit-linear-gradient(red,red,red,blue,blue,blue,blue);-webkit-background-clip:text;-webkit-text-fill-color:transparent">] ]</span> 23:25, 7 January 2025 (UTC) | |||
*'''No''' – ] says that {{tq|Non-free content should not be used when a freely licensed file that serves the same purpose can reasonably be expected to be uploaded, as is the case for almost all portraits of living people.}} While AI images may not be considered copyrightable, it could still be a copyright violation if the output resembles other, copyrighted images, pushing the image towards NFC. At the very least, I feel the use of non-free content to generate AI images violates the spirit of the NFC policy. (I'm assuming copyrighted images of a person are used to generate an AI portrait of them; if free images of that person were used, we should just use those images, and if ''no'' images of the person were used, how on Earth would we trust the output?) ] (]) 02:43, 8 January 2025 (UTC)
*'''No''', AI images should not be permitted on Misplaced Pages at all. ] (]) 11:27, 8 January 2025 (UTC) | |||
{{Archive bottom}} | |||
===Expiration date?=== | |||
:I agree only that there seems to be an unseemly amount of aggression involved in the campaign to remove this material. There seems to be a complete lack of attention to nuance involved.<p> There are articles where historical or legendary figures play such a large role in popular culture that an article such as these seems absolutely necessary, even if the current article sucks and is in dire need of improvement. (E.g. ]). With mythological or legendary creatures, current appearances and uses are in some sense as valid as "classical" ones, and should stay or merge (E.g. ]). Then there are ones that seem ridiculous, mostly because the title subject of the fork is popular culture in the first instance. (E.g. ]). People who want all of this material gone don't see a difference. <p>The anti popular-culture agenda has been misused. Notoriously so, in the case of ], where a group of his groupies refuse to allow his article to admit that he was satirized in the '']'' episode "]", despite the fact that ''South Park's'' audience exceeds that of Dawkins's scientific works or atheist screeds by at least a factor of ten.<p> I think it's time to step back from the whole business. Vague and litigious words like ''trivia'' and ''indiscriminate'' should not be used in guidelines. The wikilawyering that claims that recognizing allusions is "original research" needs to be fish-slapped; noticing these things is neither original to the editor who sees them, nor pushing an agenda in the typical case, and a citation to the work in which an allusion appears is reference enough. I'd be prepared to take the deletion of "trivia" more seriously if and when some greater sensitivity is shown to the variety of subjects involved here. - ] 19:43, 4 August 2007 (UTC) | |||
"AI," as the term is currently used, is very new. It feels like large language models and the type of image generators under discussion just got here in 2024. (Yes, I know it was a little earlier.) The culture hasn't completed its initial response to them yet. Right now, these images do more harm than good, but that may change. Either we'll come up with a better way of spotting hallucinations or the machines will hallucinate less. Their copyright status also seems unstable. I suggest that any ban decided upon here have some expiration date or required rediscussion date. Two years feels about right to me, but the important thing would be that the ban has a number on it. ] (]) 23:01, 3 January 2025 (UTC) | |||
:The South Park episode is not in Dawkins' article because it is trivial. There is no evidence that it was significant enough to be there, especially not your made up number. Also, you showed how full of hatred you are, so have a beer and chill. Also, don't pretend you know something about science.--] 15:26, 16 August 2007 (UTC) | |||
*No need for any end-date. If there comes a point where consensus on this changes, then we can change any ban then. ] (]) 05:27, 5 January 2025 (UTC) | |||
*An end date is a positive suggestion. Consensus systems like Misplaced Pages's are vulnerable to half-baked precedential decisions being treated as inviolate. With respect, this conversation does not inspire confidence that this policy proposal's consequences are well-understood at this time. If Misplaced Pages goes in this direction, it should be labeled as primarily reactionary and open to review at a later date. ] (]) 10:22, 5 January 2025 (UTC) | |||
*Agree with FOARP, '''no need for an end date'''. If something significantly changes (e.g. reliable sources/news outlets such as the ''New York Times'', BBC, AP, etc. start using text-to-image models to generate images of living people for their own articles) then this topic can be revisited later. Editors will have to go through the usual process of starting a new discussion/proposal when that time comes. ] (]) 11:39, 5 January 2025 (UTC) | |||
*:Seeing as this discussion has not touched at all on what other organizations may or may not do, it would not be accurate to describe any consensus derived from this conversation in terms of what other organizations may or may not be doing. That is, there has been no consensus that we ought to be looking to the New York Times as an example. Doing so would be inadvisable for several reasons. For one, they have sued an AI company over semi-related issues and they have teams explicitly working on what the future of AI in news ought to look like, so they have some investment in what the future of AI looks like and they are explicitly trying to shape its norms. For another, if they did start to use AI in a way that may be controversial, they would have no positive reason to disclose that and many disincentives. They are not a neutral signal on this issue. Misplaced Pages should decide for itself, preferably doing so while not disrupting the ability of people to continue creating user-generated images. ] (]) 03:07, 6 January 2025 (UTC) | |||
* ] on an indefinite basis, if something changes. An arbitrary sunset date doesn't seem much use. ] (]) 03:15, 6 January 2025 (UTC) | |||
* No need per others. Additionally, if practices change, it doesn't mean editors will decide to follow new practices. As for the technology, it seems the situation has been fairly stable for the past two years: we can detect some fakes and hallucinations immediately, many more than in the past, but certainly not all retouched elements and all generated photos available right now, even if there were a readily accessible tool or app that enabled ordinary people to reliably do so.
:Throughout history, art forgeries have been fairly reliably detected, but rarely quickly. Relatedly, I don't see why the situation with AI images would change in the next 24 months or any similar time period. <span style="font-family:Garamond,Palatino,serif;font-size:115%;background:-webkit-linear-gradient(red,red,red,blue,blue,blue,blue);-webkit-background-clip:text;-webkit-text-fill-color:transparent">] ]</span> 22:17, 9 January 2025 (UTC)
== Should ] include mention of AI-generated comments? == | |||
'''As Wikipedians, we all agree''' to adhere to the encyclopedia's core values of verifiability and notability: our articles must be accurate and their subjects must be significant. We regularly delete articles that fail to demonstrate these tenets, so why should we keep articles of ''lists of things'' that fail to demonstrate them? I agree with ] that ''in popular culture'' sections are a bane to the encyclopedia, and that their systematic removal is in the best interest of our cause.
:The first issue I take with ''in popular culture'' articles is their lax approach to verifiability. These articles accumulate vast amounts of original research as editors add in "popular interpretations" of symbolism and whatnot in media, art, and music. Connections are insinuated between unrelated items, without a proper source to defend them. For example: | |||
::From ]:"Some literary critics believe the conclusion of Andrew Marvell's 1681 poem "To His Coy Mistress" may allude to the phoenix, given its references to birds and fire," "In the anime series Beyblade, characters battle using a form of spinning top, many of which contain "bit-beasts" which are based on animals including mythological creatures. One such bit-beast is named Dranzer and is based on the Phoenix." | |||
::From ]:"The Fantastic Four are based loosely off elementals: the Human Torch and the Thing personify Fire and Earth, Mister Fantastic's fluid nature mimics Water, and the Invisible Woman can become as transparent as Air, in addition to her "invisible force" fields. In some continuities, their most recurring enemy, Dr. Doom, represented Metal and/or Lightning." | |||
::From ]:"In the movie Contact (1997), the character S.R. Hadden (played by John Hurt), responds to a comment about his technical abilities with the statement: 'Once upon a time, I was a hell of an engineer'. This is a reference to Georgia Tech's fight song, Ramblin' Wreck from Georgia Tech." | |||
::From ]:"There is also speculation that the second verse in John Denver's "Stonehaven Sunset" refers to Kent State." | |||
::From ]:"The city's road system, with its abundance of roundabouts and scarcity of traffic lights, is famously difficult to navigate for those unfamiliar with the city, while self-evident to locals. The resultant frustration for visiting motorists is almost certainly the origin of Milton Keynes' often surprisingly bitter reputation with out-of-towners." | |||
Using AI to write your comments in a discussion makes it difficult for others to assume that you are discussing in good faith, rather than trying to use AI to argue someone into exhaustion (see example of someone using AI in their replies ). More fundamentally, ] can't apply to the AI itself as AI lacks intentionality, and it is difficult for editors to assess how much of an AI-generated comment reflects the training of the AI vs. the actual thoughts of the editor. | |||
:None of these claims are referenced. Some ''might'' be accurate representations of cultural ties, but who knows? If there's a source for the ''Contact'' quote that says, "Yeah, I like Georgia Tech. S'why I made that character reference it when I wrote the screenplay," then we've gone somewhere. As it stands, it might just be a regular guy saying he was good at his job. | |||
:The second, and far more important criticism I have of ''in popular culture'' articles deals with notability, though. Misplaced Pages has a policy of keeping minorly important people, things, and ideas out of the encyclopedia. This prevents us from downgrading into a social networking site or glorified blog. Why should the same rule not apply to ''in popular culture'' lists? The majority of references are of little significance. For example: | |||
::From ]:"In Charmed, The Source of All Evil is an elected (or descended) king of all the demons, comparable to the devil, which he is referred to as once in season one," "Him, a character on the animated series, The Powerpuff Girls, is a cheerfully evil, red-skinned, cross-dressing demon," "The adult animated comedy show Aaagh! It's the Mr. Hell Show is hosted by Mr Hell who bears a striking resemblance to Satan himself." | |||
::From ]: "Eiffel 65's song "I'm Blue" mentions a blue Corvette," "Gremlins, Gizmo drives a pink Corvette toy-car," "Malcolm McDowell drives a C3 Corvette in Blue Thunder." | |||
::From ]: "Will Truman (from Will & Grace) attended NYU Law," "In Clueless, Cher gives Josh advice: "I hear the girls at NYU aren't at all particular," "In Avenue Q, the song "There is Life Outside Your Apartment" mentions NYU." | |||
:Why are any of these things important? If I created ], it would be torn to shreds. And yet, the fact that NYU is mentioned in Avenue Q is worthy of inclusion? The same goes for ]. Easy deletion fodder, but ''individually mentioned,'' worthy of inclusion. There is no threshold of significance when the only qualifier for a pop-culture reference is that something appeared in something else. ] would be of similar quality and theme, and deleted with impunity.
:Yes, I understand that the same can't be said for every mention in every list. There are some references out there that are deliberate, sourced, and present some sort of literary or critical value. God help me, I can't ''find any'' at the moment, but I'm sure they exist. And when they do, I believe they should be included in the subject article. In the end, any reference that is both notable and verifiable can add value to the encyclopedia. Noting that Chipotle restaurants once had a slogan on their bag claiming "our burritos go to eleven" does not. ] 02:12, 6 August 2007 (UTC)
:As I just mentioned elsewhere, sometimes ir/relevance is obvious. Sometimes the trivia has to accumulate to a point where a pattern appears as to some of it being relevant or some being irrelevant. (] 01:08, 7 August 2007 (UTC)) | |||
:::That pattern isn't ''our'' job to determine, though. By asserting that our growing list of references in other media is a pattern of notability is ], unless there's ] that say it first. I'm not at all opposed to an article on a particular entity's effect on media, but that has to come from a ] and ] perspective. If an academic paper is published saying that the appearance of the ] in Deep Impact holds a deeper symbolism, that's one thing. If an author reveals in an interview that he chose phoenix-like imagery to connect his piece to mythology, that's alright. But to randomly chronicle every appearance, no matter how insignificant, is bad for the encyclopedia. ] 05:45, 7 August 2007 (UTC) | |||
::The real question is, what do you ''do'' with this sort of material? It has ever been my practice not to simply remove contributions that I find dubious or unimproving without first preserving the contested content on the talk page. It seems common courtesy, respectful of the contributions of others, and lets strangers to the difference of opinion know what we are talking about. <p>Instead, what seems to happen now is not this. The practice seems to have taken root to fork out these edits into separate "in popular culture" articles, which are then proposed for deletion ''en masse'' in episodic spasms. This is not as good, for multiple reasons; most seriously, it leads to the potential loss of valuable contributions.<p>] is a perfect example. As a mythical critter, the phoenix exists only as long as people remember and rehearse the legend to yet another generation. The use of the phoenix as a heraldic symbol belongs in the article. So, for that matter, do at least some appearances of the legend in well known works of fiction. We can trust people to use their brains; if something is called a phoenix and it seems associated with fire and rebirth, it belongs somehow.<p>No, not every such appearance should be catalogued; but deleting the data en masse and wiping it from the history is an even worse thing to do. - ] 04:46, 7 August 2007 (UTC) | |||
:::I feel no remorse for the removal of factoids and irrelevant trivia from the encyclopedia, but we can't keep bad content because someone ]. If the information is valuable, and by that I mean it has significance and is verifiable, then it belongs in an article, maybe even the main article. But we both know that it's not these brilliant gems of cultural weight that are getting these articles forked and deleted. I suspect that if we held popular culture references to a higher standard when determining when to include them, we'd never have a long, listy section that needed forking. ] 05:45, 7 August 2007 (UTC) | |||
:::::"I feel no remorse for the removal of factoids and irrelevant trivia from the encyclopedia, but we can't keep bad content because someone ]." Amen to that! ] 10:52, 11 August 2007 (UTC) | |||
Should ] be amended to include that using AI to generate your replies in a discussion runs counter to demonstrating good faith? ] (]) 00:23, 2 January 2025 (UTC)
::::::Years ago in Boston, there was a radio DJ who had a rock music trivia contest each morning. If you got his question correct, you could ask him your question. Each show, they said the same thing: "Trivia", not "trivial"! "What color shoes is Jimi Hendrix wearing on his last album cover?" didn't cut the mustard. I see many of these pop culture nuggets as fitting the "trivial" category. That makes me a deleter - Off with their heads!] 04:20, 20 August 2007 (UTC)
* '''Yes''', I think this is a good idea. ] (]) 00:39, 2 January 2025 (UTC) | |||
===Another angle on this=== | |||
In the "X is mentioned in Y" sort of trivia, we already have a linkage on this, assuming that it's important enough for the article on "Y" to refer to "X" in its text. I refer to the "What links here" item in the toolbox. | |||
:'''No'''. As with all the other concurrent discussions (how many times do we actually need to discuss the exact same FUD and scaremongering?) the problem is not AI, but rather inappropriate use of AI. What we need to do is to (better) explain what we actually want to see in discussions, not vaguely defined bans of swathes of technology that, used properly, can aid communication. ] (]) 01:23, 2 January 2025 (UTC) | |||
The phrase "in popular culture" is also a problem because it implies that these references are somehow special. I note that ], for instance, completely ignores this and puts in everything from the ]s to ]. I don't see the need for the segregation, especially as it is apparently oft ignored anyway. ] 18:24, 17 August 2007 (UTC)
::Note that this topic is discussing using AI to ''generate'' replies, as opposed to using it as an aid (e.g. asking it to edit for grammar, or conciseness). As the above concurrent discussion demonstrates, users are already using AI to generate their replies in AfD, so it isn't scaremongering but an actual issue. | |||
::] also does not ban anything ("Showing good faith is not required"), but offers general advice on demonstrating good faith. So it seems like the most relevant place to include mention of the community's concerns regarding AI-generated comments, without outright banning anything. ] (]) 01:32, 2 January 2025 (UTC) | |||
==Purpose of arbcom and resolving disputes== | |||
:::And as pointed out, multiple times in those discussions, different people understand different things from the phrase "AI-generated". The community's concern is not AI-generated comments, but comments that do not clearly and constructively contribute to a discussion - ''some'' such comments are AI-generated, some are not. This proposal would, just as all the other related ones, cause actual harm when editors falsely accuse others of using AI (and this ''will'' happen). ] (]) 02:34, 2 January 2025 (UTC) | |||
It seems that arbcom is technically on the list of solutions for dispute resolution. However, it apparently cannot resolve disputes. I propose changing this, because apparently, there are some cases when all other steps in dispute resolution just fail for one reason or another. Of course, it should only be done after all other measures in ] have been both tried and failed, and at the agreement of all involved parties to abide by the arbcom decision.--]<sup><small>]</small></sup> 23:22, 14 August 2007 (UTC)
::::Nobody signed up to argue with bots here. If you're pasting someone else's comment into a prompt and asking the chatbot to argue against that comment and just posting it in here, that's a real problem and absolutely should not be acceptable. ] (]) 03:31, 2 January 2025 (UTC)
:::::Thank you for the assumption of bad faith and demonstrating one of my points about the harm caused. Nobody is forcing you to engage with bad-faith comments, but whether something is or is not bad faith needs to be determined by its content not by its method of generation. Simply using an AI demonstrates neither good faith nor bad faith. ] (]) 04:36, 2 January 2025 (UTC) | |||
:::::I don't see why we have any particular reason to suspect a respected and trustworthy editor of using AI. '']'' (] — ]) 14:31, 2 January 2025 (UTC)
::::I'm one of those people who clarified the difference between AI-generated vs. edited, and such a difference could be made explicit with a note. Editors are already accusing others of using AI. Could you clarify how you think addressing AI in ] would cause actual harm? ] (]) 04:29, 2 January 2025 (UTC) | |||
:::::By encouraging editors to accuse others of using AI, by encouraging editors to dismiss or ignore comments because they suspect that they are AI-generated rather than engaging with them. @] has already encouraged others to ignore my arguments in this discussion because they suspect I might be using an LLM and/or be a bot (for the record I'm neither). ] (]) 04:33, 2 January 2025 (UTC) | |||
::::::I think {{u|bloodofox}}'s ] was about "you" in the rhetorical sense, not "you" as in Thryduulf. ] (]) 11:06, 2 January 2025 (UTC) | |||
:::::Given your relentlessly pro-AI comments here, it seems that you'd be A-OK with just chatting with a group of chatbots here — or leaving the discussion to them. However, most of us clearly are not. In fact, I would immediately tell someone to get lost were it confirmed that indeed that is what is happening. I'm a human being and find the notion of wasting my time with chatbots on Misplaced Pages to be incredibly insulting and offensive. ] (]) 04:38, 2 January 2025 (UTC) | |||
::::::My comments are neither pro-AI nor anti-AI, indeed it seems that you have not understood pretty much anything I'm saying. ] (]) 04:43, 2 January 2025 (UTC) | |||
:::::::Funny, you've done nothing here but argue for more generative AI on the site and now you seem to be arguing to let chatbots run rampant on it while mocking anyone who doesn't want to interface with chatbots on Misplaced Pages. Hey, why not just sell the site to Meta, am I right? ] (]) 04:53, 2 January 2025 (UTC) | |||
::::::::I haven't been arguing for more generative AI on the site. I've been arguing against banning it on the grounds that such a ban would be unclear, unenforceable, wouldn't solve any problems (largely because whether something is AI or not is completely irrelevant to the matter at hand) but would instead cause harm. Some of the issues identified are actual problems, but AI is not the cause of them and banning AI won't fix them. | |||
::::::::I'm not mocking anybody, nor am I advocating to {{tpq|let chatbots run rampant}}. I'm utterly confused why you think I might advocate for selling Misplaced Pages to Meta (or anyone else for that matter)? Are you actually reading anything I'm writing? You clearly are not understanding it. ] (]) 05:01, 2 January 2025 (UTC) | |||
:::::::::So we're now in 'everyone else is the problem, not me!' territory now? Perhaps try communicating in a different way because your responses here are looking very much like the typical AI apologetics one can encounter on just about any contemporary LinkedIn thread from your typical FAANG employee. ] (]) 05:13, 2 January 2025 (UTC) | |||
::::::::::No, this is not a {{tpq|everyone else is the problem, not me}} issue because most other people appear to be able to understand my arguments and respond to them appropriately. Not everybody agrees with them, but that's not an issue. | |||
::::::::::I'm not familiar with Linkedin threads (I don't use that platform) nor what a "FAANG employee" is (I've literally never heard the term before now) so I have no idea whether your characterisation is a compliment or a personal attack, but given your comments towards me and others you disagree with elsewhere I suspect it's closer to the latter. | |||
::::::::::AI is a tool. Just like any other tool it can be used in good faith or in bad faith, it can be used well and it can be used badly, it can be used in appropriate situations and it can be used in inappropriate situations, the results of using the tool can be good and the results of using the tool can be bad. Banning the tool inevitably bans the good results as well as the bad results but doesn't address the reasons why the results were good or bad and so does not resolve the actual issue that led to the bad outcomes. ] (]) 12:09, 2 January 2025 (UTC) | |||
:::::::::::In the context of generating comments to other users though, AI is much easier to use for bad faith than for good faith. LLMs don't understand Misplaced Pages's policies and norms, and so are hard to utilize to generate posts that productively address them. By contrast, bad actors can easily use LLMs to make low quality posts to waste people's time or wear them down. | |||
:::::::::::In the context of generating images, or text for articles, it's easy to see how the vast majority of users using AI for those purposes is acting in good faith as these are generally constructive tasks, and most people making bad faith changes to articles are either obvious vandals who won't bother to use AI because they'll be reverted soon anyways, or trying to be subtle (povpushers) in which case they tend to want to carefully write their own text into the article. | |||
:::::::::::It's true that AI "is just a tool", but when that tool is much easier to use for bad faith purposes (in the context of discussions) then it raises suspicions about why people are using it. ] (]) 22:44, 2 January 2025 (UTC) | |||
::::::::::::{{tq|LLMs don't understand Misplaced Pages's policies and norms}} They're not designed to "understand" them since the policies and norms were designed for human cognition. The fact that AI is used rampantly by people acting in bad faith on Misplaced Pages does not inherently condemn the AI. To me, it shows that it's too easy for vandals to access and do damage on Misplaced Pages. Unfortunately, the type of vetting required to prevent that at the source would also potentially require eliminating IP-editing, which won't happen. <sub>Duly signed,</sub> ''']'''-''<small>(])</small>'' 14:33, 15 January 2025 (UTC) | |||
:::::::You mentioned "FUD". That acronym, "fear, uncertainty and doubt," is used in precisely two contexts: pro-AI propagandizing and persuading people who hold memecoin crypto to continue holding it. Since this discussion is not about memecoin crypto, that would suggest you are using it in a pro-AI context. I will note, fear, uncertainty and doubt is not my problem with AI. Rather it's anger, aesthetic disgust and feeling disrespected when somebody makes me talk to their chatbot. ] (]) 14:15, 14 January 2025 (UTC)
::::::::{{tpq|That acronym, "fear, uncertainty and doubt," is used in precisely two contexts}} is factually incorrect. | |||
::::::::FUD both predates AI by many decades (indeed, if you'd bothered to read the ] article you'd learn that the concept was first recorded in 1693, the exact formulation dates from at least the 1920s, and its use in technology contexts originated in 1975 with mainframe computer systems). The claim that its use, even in just AI contexts, is limited to pro-AI advocacy is ludicrous (even ignoring things like ]); examples can be found in these sprawling discussions from those opposing AI use on Misplaced Pages. ] (]) 14:52, 14 January 2025 (UTC)
:'''Not really''' – I agree with Thryduulf's arguments on this one. Using AI to help tweak or summarize or "enhance" replies is of course not bad faith – the person is trying hard. Maybe English is their second language. Even for replies 100% AI-generated the user may be an ESL speaker struggling to remember the right words (I always forget 90% of my French vocabulary when writing anything in French, for example). In this case, I don't think we should make a ''blanket'' assumption that using AI to generate comments is not showing good faith. '']'' (] — ]) 02:35, 2 January 2025 (UTC) | |||
:Okay, what are you basing these comments on? ](]) 23:26, 14 August 2007 (UTC) | |||
*'''Yes''' because generating walls of text is not good faith. People "touching up" their comments is also bad (for starters, if you lack the English competency to write your statements in the first place, you probably lack the competency to tell if your meaning has been preserved or not). Exactly ''what'' AGF should say needs work, but something needs to be said, and <s>AGF</s>DGF is a good place to do it. ] (]) 02:56, 2 January 2025 (UTC) | |||
*:Not all walls of text are generated by AI, not all AI generated comments are walls of text. Not everybody who uses AI to touch up their comments lacks the competencies you describe, not everybody who does lack those competencies uses AI. It is not always possible to tell which comments have been generated by AI and which have not. This proposal is not particularly relevant to the problems you describe. ] (]) 03:01, 2 January 2025 (UTC) | |||
:::Someone has to ask: Are you generating all of these pro-AI arguments using ChatGPT? It'd explain a lot. If so, I'll happily ignore any and all of your contributions, and I'd advise anyone else to do the same. We're not here to be flooded with LLM-derived responses. ] (]) 03:27, 2 January 2025 (UTC) | |||
::::That you can't tell whether my comments are AI-generated or not is one of the fundamental problems with these proposals. For the record they aren't, nor are they pro-AI - they're simply anti throwing out babies with bathwater. ] (]) 04:25, 2 January 2025 (UTC) | |||
:::::I'd say it also illustrates the serious danger: We can no longer be sure that we're even talking to other people here, which is probably the most notable shift in the history of Misplaced Pages. ] (]) 04:34, 2 January 2025 (UTC) | |||
:::::::How is that a "serious danger"? If a comment makes a good point, why does it matter whether it was AI-generated or not? If it doesn't make a good point, why does it matter if it was AI-generated or not? How will these proposals resolve that "danger"? How will they be enforceable? ] (]) 04:39, 2 January 2025 (UTC)
:::::::Misplaced Pages is made for people, by people, and I like most people will be incredibly offended to find that we're just playing some kind of LLM pong with a chatbot of your choice. You can't be serious. ] (]) 04:40, 2 January 2025 (UTC) | |||
::::::::You are entitled to that philosophy, but that doesn't actually answer any of my questions. ] (]) 04:45, 2 January 2025 (UTC) | |||
:::::::"why does it matter if it was AI generated or not?" | |||
:::::::Because it takes little effort to post a lengthy, low quality AI-generated post, and a lot of effort for human editors to write up replies debunking them. | |||
:::::::"How will they be enforceable? " | |||
:::::::] isn't meant to be enforced. It's meant to explain to people how they can demonstrate good faith. Posting replies to people (who took the time to write them) that are obviously AI-generated harms the ability of those people to assume good faith. ] (]) 05:16, 2 January 2025 (UTC) | |||
:The linked "example of someone using AI in their replies" appears – to me – to be a non-AI-generated comment. I think I preferred the allegedly AI-generated comments from that user (]). The AI was at least superficially polite. ] (]) 04:27, 2 January 2025 (UTC) | |||
::Obviously the person screaming in all caps that they use AI because they don't want to waste their time arguing is not using AI for that comment. Their first post calls for the article to be deleted for not "" and "merely" reiterating what other sources have written. | |||
::Yes, after a human had wasted their time explaining all the things wrong with its first post, then the bot was able to write a second post which ''looks'' ok. Except it only superficially ''looks'' ok, it doesn't actually accurately describe the articles. ] (]) 04:59, 2 January 2025 (UTC) | |||
:::Multiple humans have demonstrated in these discussions that humans are equally capable of writing posts which superficially ''look'' OK but don't actually accurately relate to anything they are responding to. ] (]) 05:03, 2 January 2025 (UTC)
::::But I can assume that everyone here is acting in good faith. I can't assume good faith in the globally-locked sock puppet spamming AfD discussions with low effort posts, whose bot is just saying whatever it can to argue for the deletion of political pages the editor doesn't like. ] (]) 05:09, 2 January 2025 (UTC)
:::::True, but I think that has more to do with the "globally-locked sock puppet spamming AfD discussions" part than with the "some of it might be " part. ] (]) 07:54, 2 January 2025 (UTC)
::::::All of which was discovered because of my suspicions about their inhuman and meaningless replies. "Reiteration isn't the problem; redundancy is," maybe sounds pithy in a vacuum, but this was written in reply to me stating that we aren't supposed to be doing OR but reiterating what the sources say.
::::::"Your criticism feels overly prescriptive, as though you're evaluating this as an academic essay" also ''sounds good'', until you realize that the bot is actually criticizing its own original post.
::::::The fact that my suspicions about their good faith were ultimately validated only makes it even harder for me to assume good faith in users who sound like ChatGPT. ] (]) 08:33, 2 January 2025 (UTC)
:::::::I wonder if we need some other language here. I can understand feeling like this is a bad interaction. There's no sense that the person cares; there's no feeling like this is a true interaction. A contract lawyer would say that there's no ], and there can't be, because there's no mind in the AI, and the human copying from the AI doesn't seem to be interested in engaging their brain.
:::::::But... do you actually think they're doing this for the purpose of ''intentionally'' harming Misplaced Pages? Or could this be explained by other motivations? ] – or to anxiety, insecurity (will they hate me if I get my grammar wrong?), incompetence, negligence, or any number of other "understandable" (but still something ]- and even block-worthy) reasons. ] (]) 08:49, 2 January 2025 (UTC)
::::::::The ] has a header at the top asking people not to template them because it is "impersonal and disrespectful", instead requesting "please take a moment to write a comment below '''in your own words'''"
::::::::Does this look like acting in good faith to you? Requesting other people write personalized responses to them while they respond with an LLM? Because it looks to me like they are trying to waste other people's time. ] (]) 09:35, 2 January 2025 (UTC)
:::::::::] means that you assume people aren't deliberately screwing up on purpose. Humans are self-contradictory creatures. I generally do assume that someone who is being hypocritical hasn't noticed their contradictions yet. ] (]) 07:54, 3 January 2025 (UTC)
::::::::::"Being hypocritical" in the abstract isn't the problem; it's that asking people to put effort into their comments while putting minimal effort into your own appears bad faith, especially when said person says they don't want to waste time writing comments to stupid people. The fact you are arguing AGF for this person is both astounding and disappointing. ] (]) 16:08, 3 January 2025 (UTC)
:::::::::::It feels like there is a lack of reciprocity in the interaction, even leaving aside the concern that the account is a block-evading sock.
:::::::::::But I wonder if you have read AGF recently. The first sentence is "'''Assuming good faith''' ('''AGF''') means assuming that people are not deliberately ''trying'' to hurt Misplaced Pages, even when their actions are harmful."
:::::::::::So we've got some of this (e.g., harmful actions). But do you really believe this person woke up in the morning and decided "My main goal for today is to deliberately hurt Misplaced Pages. I might not be successful, but I sure am going to try hard to reach my goal"? ] (]) 23:17, 4 January 2025 (UTC)
::::::::::::Trying to hurt Misplaced Pages doesn't mean they have to literally think "I am trying to hurt Misplaced Pages"; it can mean a range of things, such as "I am trying to troll Wikipedians". A person who thinks a cabal of editors is guarding an article page, and that they need to harass them off the site, may think they are improving Misplaced Pages, but at the least I wouldn't say that they are acting in good faith. ] (]) 23:27, 4 January 2025 (UTC)
:::::::::::::Sure, I'd count that as a case of "trying to hurt Misplaced Pages-the-community". ] (]) 06:10, 5 January 2025 (UTC)
* The issues with AI in discussions are not related to good faith, which is narrowly defined to intent. ] (]) 04:45, 2 January 2025 (UTC)
*:In my mind, they are related inasmuch as it is much more difficult for me to ascertain good faith if the words are evidently not written in large part by the person I am speaking to, but instead generated based on an unknown prompt in what is likely a small fraction of the expected time. To be frank, in many situations it is difficult to avoid the conclusion that the disparity in effort is being leveraged in something less than good faith. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 05:02, 2 January 2025 (UTC)
*::Assume good faith, don't ascertain! LLM use can be deeply unhelpful for discussions and the potential for misuse is large, but in the most recent discussion I've been involved with where I observed an LLM post being responded to by an LLM post, I believe both users were doing this in good faith. ] (]) 05:07, 2 January 2025 (UTC)
*:::All I mean to say is that it should be licit to mention unhelpful LLM use like any other unhelpful rhetorical pattern. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 05:09, 2 January 2025 (UTC)
*::::Sure, but ] doesn't mention any unhelpful rhetorical patterns. ] (]) 05:32, 2 January 2025 (UTC)
*::::The fact that everyone (myself included) defending "LLM use" says "use" rather than "generated" is a pretty clear sign that no one really wants to communicate with someone using "LLM generated" comments. We can argue about bans (not being proposed here), how to know if someone is using an LLM, the nuances of "LLM use", etc., but at the very least we should be able to agree that there are concerns with LLM generated replies, and if we can agree that there are concerns then we should be able to agree that somewhere in policy we should be able to find a place to express those concerns. ] (]) 05:38, 2 January 2025 (UTC)
*:::::...or they could be saying "use" because "using LLMs" is shorter and more colloquial than "generating text with LLMs"? ] (]) 06:19, 2 January 2025 (UTC)
*::::::Seems unlikely when people justify their use for editing (which I also support), and not for generating replies on their behalf. ] (]) 06:23, 2 January 2025 (UTC)
*:::::::This is just semantics.
*:::::::For instance, I am OK with someone using a LLM to post a productive comment on a talk page. I am also OK with someone generating a reply with a LLM that is a productive comment to post to a talk page. I am not OK with someone generating text with an LLM to include in an article, and also not OK with someone using a LLM to contribute to an article.
*:::::::The only difference between these four sentences is that two of them are more annoying to type than the other two. ] (]) 08:08, 2 January 2025 (UTC)
*::::::::Most people already assume good faith in those making productive contributions. In situations where good faith is more difficult to assume, would you trust someone who uses an LLM to generate all of their comments as much as someone who doesn't? ] (]) 09:11, 2 January 2025 (UTC)
*:::::::::Given that LLM use is completely irrelevant to the faith in which a user contributes, yes. Of course what amount that actually is may be anywhere between completely and none. ] (]) 11:59, 2 January 2025 (UTC)
*::::::::::LLM use is relevant as it allows bad faith users to disrupt the encyclopedia with minimal effort. Such a user , as well as started and , all using AI. I had previously been involved in a debate with another sock puppet of theirs, but at that time they didn't use AI. Now it seems they are switching to using an LLM just to troll with minimal effort. ] (]) 21:44, 2 January 2025 (UTC)
*:::::::::::LLMs are a tool that can be used by good and bad faith users alike. Using an LLM tells you nothing about whether a user is contributing in good or bad faith. If somebody is trolling they can be, and should be, blocked for trolling regardless of the specifics of how they are trolling. ] (]) 21:56, 2 January 2025 (UTC)
*::::::::::::A can of spray paint, a kitchen knife, etc., are tools that can be used for good or bad, but if you bring them some place where they have few good uses and many bad uses then people will be suspicious about why you brought them. You can't just assume that a tool in any context is equally harmless. Using AI to generate replies to other editors is more suspicious than using it to generate a picture exemplifying a fashion style, or a description of a physics concept. ] (]) 23:09, 2 January 2025 (UTC)
*:::::::::I wouldn't trust anything factual the person would have to say, but I wouldn't assume they were malicious, which is the entire point of ]. ] (]) 16:47, 2 January 2025 (UTC)
*::::::::::] is not a death pact though. At times you should be suspicious. Do you think that if a user, ''who you already have suspicions of'', is also using an LLM to generate their comments, that that doesn't have any effect on those suspicions? ] (]) 21:44, 2 January 2025 (UTC)
*:::::::::::So… If you suspect that someone is not arguing in good faith… just stop engaging them. If they are creating walls of text but not making policy based arguments, they can be ignored. Resist the urge to respond to every comment… it isn’t necessary to “have the last word”. ] (]) 21:57, 2 January 2025 (UTC)
*::::::::::::As the person ] demonstrates, you can't "just stop engaging them". When they then somebody has to engage them in some way. It's not about trying to "have the last word"; this is a collaborative project, and it generally requires engaging with others to some degree. When someone like the person I linked to above (now a banned sock) spams low quality comments across dozens of AfDs, then they are going to waste people's time, and telling others to just not engage with them is dismissive of that. ] (]) 22:57, 2 January 2025 (UTC)
*:::::::::::::That they've been banned for disruption indicates we can do everything we need to do to deal with bad faith users of LLMs without assuming that everyone using an LLM is doing so in bad faith. ] (]) 00:33, 3 January 2025 (UTC)
*::::::::::::::I don't believe we should assume everyone using an LLM is doing so in bad faith, so I'm glad you think my comment indicates what I believe. ] (]) 01:09, 3 January 2025 (UTC)
::Currently I, and many other editors, seem unable to reach a consensus over the allegations of apartheid articles, and we seem to be unable to reach a solution through regular means in ] (only an unresolved content dispute). See ]. At first, I thought that was the purpose of ArbCom, but apparently it isn't, and apparently, a lot of other people involved in the dispute thought so as well.--]<sup><small>]</small></sup> 23:32, 14 August 2007 (UTC)
:::The ArbCom doesn't, and never has, arbitrated content disputes except in rare exceptions. ArbCom deals with user conduct. ] ] 00:08, 15 August 2007 (UTC)
::::I know that, but there appears to be some confusion by many parties about this, and it seems like a group of highly respected editors who can make decisions over content disputes might be in order, which is partially why I am proposing this change.--]<sup><small>]</small></sup> 02:29, 15 August 2007 (UTC)
I agree completely with Sefringle. We desperately need some higher court of appeal for resolving content disputes. Otherwise what are dispute resolutions ''for''? Not all disputes can be solved by addressing user conduct. Sometimes, both sides show good etiquette, but simply cannot come to resolution about some highly volatile issue. --] 02:38, 15 August 2007 (UTC)
:That would be a really bad idea to have some higher authority empowered to make decisions on content. Work it out; I know it's not easy, but usually if users behave then discussions can lead to reasonable compromises or some sort of consensus. ] 02:44, 15 August 2007 (UTC)
:::Sure, it happens. So let it happen. I'd rather see an infinite edit war than a set of people empowered to make content decisions. ] 02:50, 15 August 2007 (UTC)
::I see nothing wrong with giving some people the power to make content decisions. Besides, no process is immune from appeal, even at the highest levels; there is always the ability to simply discuss it individually, at the talk pages of each of the individual arbiters.
::By the way, I'd much rather see final decisions made on content, rather than on individual editors' status, like we have now.
::Also, the current system is creating a direct incentive for editors to hurl accusations and counter-accusations, since that is the only way to pursue these matters, according to the official procedures themselves.
::By the way, Dickylon, actually I'd rather see a set of people empowered to make content decisions, than an infinite edit war. --] 03:20, 15 August 2007 (UTC)
I strongly disagree with this. It goes against a fundamental part of Misplaced Pages which is that content disputes are resolved through discussion and consensus and not by allowing certain editors to make executive decisions on content. I recall a comment by Jimbo that even ''he'' was scared to edit Nupedia. Also, this seems a bit ] to me. ] 06:14, 15 August 2007 (UTC)
=== Content or respect of wp principles ? ===
It is obvious that giving a ''court'' the right to decide on content is dangerous and could give more bad results than good ones (censorship - oriented editorial lines), but could not a ''court'' state that some choices do not respect[REDACTED] principles?<br/>
For example, if it is clear that a ''court'' cannot decide about the reality or pertinence of an ''information'', could it not take decisions or give advice concerning a formulation's compliance with fundamental principles? ] 09:18, 15 August 2007 (UTC)
:Excellent point by Alithien. I agree completely, and feel this is an extremely fair and reasonable idea to add to the structure and format of dispute resolution processes. --] 16:02, 15 August 2007 (UTC)
No need. If the issue is whether a source is reliable, ask at the reliable sources noticeboard. If the issue is BLP, ask at the BLP noticeboard. If we come up with another type of issue for which clearly correct answers are likely to be forthcoming and that regularly occurs, we'll set up another noticeboard for that type of issue. Those are good, functioning, and non-court-like mechanisms that give advice. Article RfCs sometimes succeed - and would more often if more editors paid attention to them.
Where things fail is where there are large factions of strongly opinionated pro/anti editors, some of whom are not dedicated to NPOV. Group dynamics make achievement of consensus very difficult until there is an agreement to seek NPOV. Sometimes that requires weeding editors who really don't want an NPOV article out of the discussion. ] 16:09, 15 August 2007 (UTC)
:Thanks for your reply. However, that's an interesting idea. I guess you feel that weeding out those POV editors won't involve any further controversy, and would totally solve the problem? Not sure I agree. --] 16:12, 15 August 2007 (UTC)
::Nothing will "solve" the problem of editing tensions on controversial articles; this tension is one of the keys to Misplaced Pages and how it functions. On the other hand, if tensions go beyond acceptable give-and-take to abusiveness, ArbCom steps in to address the offending behavior. A "content ArbCom" would be unlikely to work, because it would require people who are a) willing to take the inevitable abuse, b) highly experienced Wikipedians, and c) thought to be impartial on all possible content matters by most or all of the community. Few, if any, such people exist. And look what happens to impartial arbiters: ], a truly uninvolved user without a horse in the race, closed the DRV on an allegations of apartheid article. Within moments, he was being savaged by the side that didn't like his decision as biased, deletionist, not having enough article-writing experience, etc. The problem is the ''behavior'' and the ''atmosphere'', not the existence of controversy. ''']''' <sup>]</sup> 17:44, 15 August 2007 (UTC)
:::The issue, though, is not always about the behavior. Editors shouldn't necessarily be weeded out because they have a particular view, unless they prove themselves unwilling and unable to compromise, and sometimes that judgement is made too soon. Sometimes the actual problem is the dispute, and censorship of opposing views is not necessarily the best solution. A trial of the editors is not necessarily what is always needed; not when we are facing content disputes, especially ones which harm wikipedia's value system.--]<sup><small>]</small></sup> 02:10, 16 August 2007 (UTC)
:::Well. I am very interested in the history of 1948 and I am really fed up with discussing with "pov-pushers" who have never even read a book about the topic and who come and add material destroying good work. And I am not the only one concerned.
:::When the [REDACTED] community decides to support contributors over pov-pushers, then signal it.
:::] 18:34, 15 August 2007 (UTC)
::::Yes. I am very interested in medical and health-related topics, and I get really fed up with POV-pushers who lack knowledge, perspective, or experience and come along and destroy good work or maintain misinformation. But that is the price of working on an encyclopedia which anyone can edit. In general, the community is pretty good about supporting contributors over POV-pushers (with a few exceptions), though resolving such issues often takes longer than I'd like. ''']''' <sup>]</sup> 22:10, 15 August 2007 (UTC)
:::::No. It doesn't take longer than you would like.
:::::It is simply not done.
:::::Because when pov-pushers are clever enough not to insult, they are just not stopped.
:::::On the topics related to the Israeli-Palestinian conflict this is clear and well known.
:::::So, if the community doesn't want to act, at least the minimum would be to write: YES,[REDACTED] is unable to deal with that; that is the reason why Citizendium appeared.
:::::No regards, ] 06:36, 16 August 2007 (UTC)
I don't like it, it's ]y, and reminds me of something out of ]. ''All editors are equal, but some editors are more equal than others.'' Not a path we want to start down. -- ] 18:19, 17 August 2007 (UTC)
Honestly, we'll give ArbCom the power to block editors who they believe are causing the problem, and thus we give them the power to decide who wins the dispute, since they can just block the opposition, yet we won't let them just resolve the dispute by executive order. Seems a bit ironic.--]<sup><small>]</small></sup> 04:22, 18 August 2007 (UTC)
:Not really. If they were to resolve the dispute by fiat, then no future editor could change the article. But by getting rid of editors who fail "plays well with others", content-related discussion is merely postponed until another editor comes along to take up the "defeated" side of the dispute. --] 08:26, 21 August 2007 (UTC)
:It's really quite simple. Minorities don't get special treatment. Majorities don't get special treatment. No one gets special treatment because we are ] and ]. Article content is determined through consensus and our policies and guidelines. Those who do not wish to play by those rules are removed from the articles in question by ArbCom. Those who wish to play by the rules but disagree with the current consensus can seek dispute resolution through RFC's and mediation or general feedback through our noticeboards. Misplaced Pages does not and should never hold any particular view on a subject while banning those who disagree with that view. There are other Wikis for that kind of thing, like ]. ] 10:01, 21 August 2007 (UTC)
== Overciting and FA == | |||
An FA reviewer of ], an article I edited heavily and nominated for FA, has commented that I seem to have "gone nuts" with the footnotes, to the point that it impedes readability. It is true that almost every sentence is footnoted, and a few have two (to avoid mid-sentence citations). However, I felt like having a footnote on almost every sentence was the best way to avoid complaints that such-and-such wasn't cited properly, especially since I've had to return most of the sources to libraries far and wide by now. This mentality was reinforced by the GA reviewer of ], who stated: "Presumably you'll be taking this article to FAC in the near future. If so, then building up your citations to the "almost every sentence" point certainly won't hurt anything." | |||
I've already used paragraph-at-a-time citation where every sentence in the paragraph comes from the same source. The FA reviewer on the Hawes article suggested that when most of the citations in a paragraph are from the same source, that I should move all of them to the end of the paragraph. While this probably would improve readability, I'm not sure it's in line with citation policies, a subject on which I am clearly not an expert. So I refer the matter to my more experienced and better versed wiki-brethren (and sistren?). What say you? <span id="{{{User|Acdixon}}}" class="plainlinks" style="color:#002bb8">] <sup>(] <small>•</small> ] <small>•</small> )</span></sup> 13:39, 15 August 2007 (UTC) | |||
:Well, we don't have a citation policy, only style guidelines (]), but if you've shepherded articles to GA and towards FA, I'm guessing you know that. Based on my academic work, I would say that any source used heavily throughout a paragraph should be at the end of it, but you should only have two citations at the end of a paragraph if they both support the same thing(s), albeit with different details. Sentences within such paragraphs ''not'' supported by the paragraph citation(s) should be separately cited, and of course all quotations should be cited immediately. ](]) 13:44, 15 August 2007 (UTC)
The first paragraph of the main body is clearly overcited. If any of the general references listed at the bottom of the article will support the fact, it doesn't need a sentence-specific citation (for example, names of parents). A cite at the end of that paragraph would be fine, but I wouldn't even consider that those sentences needed cites. ] (]/]) 14:43, 15 August 2007 (UTC)
:I keep hoping that someday Misplaced Pages will hide footnotes by default so that the general reader doesn't have to look at them, but they can be "turned on" by anyone who wants to see them. —] ] 14:57, 15 August 2007 (UTC) | |||
Looks good to me, actually. It's not like you're citing every sentence to the same ref. It doesn't impede readability to me: I just skim over the blue numbers...--] 17:13, 15 August 2007 (UTC) | |||
:I was the one that raised objections about overciting at the FAC in question. There are 2 things I would like to say: | |||
:*It is not inconsistent both to require cites and to require that they be well-organized. Per ] (also read ]), the section titled '''Text-source relationship''' says it well, I think.
:*To drop a footnote after every sentence simply to head off problems seems rather ]y, like saying "I know that those cite-nazis are going to require more footnotes, so I'll show them. I'll cite every sentence..." Quite frankly, I have NEVER seen a blanket requirement that every sentence be cited. As a rule of thumb, I have NEVER seen a single objection where an FA has a cite 1) At the end of each paragraph 2) Following each quote 3) After statements of opinion (such as "Historians have found that"... or "Critics have said that") or 4) After superlative statements (so-and-so is the largest, best selling, etc., etc.) It seems to assume bad faith in the people who will be reviewing your article to think in advance they will have unreasonable objections and then simply attempt to head off those objections by being overzealous.
:I frequently request that articles be better cited. Nearly every time I have failed a GA nomination OR have objected to an FAC it has been because of not using inline citations enough. However, the article in question seems to go beyond prudent citation to the point of being, well, ]y... --]|]|] 01:41, 16 August 2007 (UTC) | |||
::Are you sure it's wise to be invoking WP:POINT here? I'm sure you're just misusing it, because accusing someone of violating it is a complete assumption of bad faith. What I think you mean to say is that this article is just over the top in its citations. The version I'm looking at right now has an acceptable number of citations, but only just. ] 03:24, 16 August 2007 (UTC)
:::I didn't want to specifically accuse anyone of being disruptive per se, but several people have implied, both here and at the FAC in question, that the addition of the citations was not because they felt the article honestly needed them, but because they felt that there were people out there who would unreasonably hold up an otherwise FA worthy article for a frivolous reason. To intentionally overcite an article since you assume that people will object to a reasonably cited article seems an assumption of bad faith at the least, and possibly point-making of the worst kind... I do want to say emphatically that I do NOT accuse Acdixon of doing this. I think it's much more a case of his receiving bad advice from people who themselves wish to make a point: that there are people who believe that there are editors out there who will only accept an article that is referenced at every sentence. Those people advise that it is better to just reference every sentence before trying FAC; I am denying that this has ever been the case. Such requirements have never existed and have never been asked for by anyone commenting on an FAC. Yes, often articles arrive at FAC that need many more references, but rather than try to use references appropriately, it seems that some would rather over-use footnotes so that no one can complain about a "lack" of citations. That misses the point entirely. There is a right way to do it, and it is my opinion that this is not it. Footnotes should be applied intelligently as needed; articles that footnote every single sentence are just as bad as ones that don't use them at all. Using them everywhere is not the same as using them correctly. --]|]|] 03:53, 16 August 2007 (UTC)
::::I'm glad I didn't comment on this last night as I had intended, because I did feel like I was being accused. I am glad to see this morning that Jayron32 understands my predicament. This is only my 3rd FA nom, and though the first two passed, it was only after significant contention, so I guess you could say I'm a little over-cautious at this point. | |||
::::On my talk page, Raul also pointed me to ]. Unfortunately, that hasn't helped a whole lot. I don't think the first two points under "When not to cite" are applicable to ], as very few people know there was such a person in the first place. The third one is the one I'm most concerned about. I've been known to rearrange material in ''an article that I wrote'' and accidentally leave the cite behind. I felt like citing every sentence would avoid that. Also, the guideline of "material that is challenged or likely to be challenged" is a little too ] for my very ] brain. I realize that the guideline has to be general enough to apply to a variety of articles, but I still find this a difficult judgment call. | |||
::::I think Kevin Myers above has the best solution – some way to show/hide footnotes. How do we get this idea to the "Keepers of the Wiki?" <span id="{{{User|Acdixon}}}" class="plainlinks" style="color:#002bb8">] <sup>(] <small>•</small> ] <small>•</small> )</span></sup> 11:57, 16 August 2007 (UTC) | |||
*The point is that using those in-line superscripted numerical links overly much impedes legibility of the article. ] 10:44, 16 August 2007 (UTC) | |||
*I wasn't going to comment on this, but I can't keep from it. I agree with ] on this topic. I tend to skim over the footnotes anyway, and the article doesn't feel overcited to me. I often like to check verifiability of statements mentioned in articles, and I like to know the original source for that particular statement. I don't like having to go to the end of the paragraph and dig through multiple references to find it in the original source. This is my opinion, obviously, but I believe the article is fine as is. Misplaced Pages, please make it to where you can show/hide footnotes! -- ''Steven Williamson (])'' - ] ] 14:54, 17 August 2007 (UTC) | |||
:I also personally prefer to err on the side of over-citation. I've been in the situation of needing to go back to original source material to verify things, and tying sources closely to their related statements saved a lot of time and effort there. In addition to helping the ] of the article at hand, it helps in keeping track of which sources may be useful for related articles. As for hiding the superscripted references, if a logged-in user really wants to do that, the following could be added to their monobook.css: | |||
:<pre>.reference { display: none; }</pre> | |||
:— ]::''']''' 15:09, 17 August 2007 (UTC) | |||
::*I also think over-citing makes an article almost unreadable. I have quite a lot of trouble trying to read something like this. Mostly because it is on a computer monitor, my eyes keep being "pulled up" to the previous line. Telling a passing-by reader that the way to avoid that is "register, log in when reading, learn CSS, learn how to use it on WP" is asking too much, isn't it? - ] 14:43, 21 August 2007 (UTC)
:* Citation in ] is great. I don't think citing every sentence should be a requirement, but there is certainly nothing wrong with doing so. I don't understand why anyone would object to it. How on earth does it impede readability? I dislike putting more than one note after any sentence - multiple references for one point ought to be in the same note as happens in published material - but one little superscript numeral has no effect on readability. People who object to articles at FA or GA on those grounds should be banned from reviewing on said locations. :) Well, that's just my opinion anyways. ] (<small>]</small>) 15:24, 17 August 2007 (UTC) | |||
:Honestly, I have been mulling this over for a few days... needed some distance from it to think it through. What it seems I was mainly upset about was not the article, or Acdixon, but rather the mentality that people who require a well-referenced article are being unreasonable in doing so. Several comments made around the periphery of this FAC seemed to indicate that there are people who advocate citing every sentence, not because they believe that it should be done that way, but precisely because they believe it SHOULDN'T be done that way; there is a group of editors at Misplaced Pages who feel very strongly AGAINST footnoting. I was really projecting my displeasure against that group, the anti-referencing crowd; really this article is quite good, and the issue is small; honestly I would MUCH rather an article be overcited than undercited; in the best-case scenario articles would simply be CORRECTLY cited. However, I can see no further reason to withhold my support for the article; I am glad my comments generated further discussion, and I see that as a good thing. I am sorry if I got unreasonable; hopefully you all will forgive me... --]|]|] 01:27, 18 August 2007 (UTC)
::No offense taken here, Jayron32, and I wholeheartedly agree that the discussion was beneficial on all parts. Anything that makes us look at ways to make Misplaced Pages better has got to be a good thing, right? <span id="{{{User|Acdixon}}}" class="plainlinks" style="color:#002bb8">] <sup>(] <small>•</small> ] <small>•</small> )</sup></span> 15:57, 18 August 2007 (UTC)
:: I think the mentality is not that "people who require a well-referenced article are being unreasonable in doing so", but that "people who believe that citations require locality at the sentence level are being unreasonable in thinking that that is a requirement for an article to be well-referenced." It absolutely infuriates me when someone dings an FA (or adds the dreaded "citation needed" tag all over an article) when the citations in the article ''already address the point'', because it's a clear indication that the dinger has not actually checked the existing citations, and probably never will. ] 05:31, 20 August 2007 (UTC)
As a disinterested reader, I have to say that to my eyes the article is extremely overcited. I don't see the need for a citation for the subject's mother's name, or the other totally uncontroversial biographical information. And while I'm used to reading scientific papers, I did find the constant footnote numbers distracting. I thought I was going to come down on the side of the author, but I'm surprised to find myself itching to pick up a red pencil. If it were me, I would strike out at least three quarters of the citations, and that's a conservative guess. The article itself looks like a good one, stained by the excessive caution of the author. A citation or two in each paragraph shows that someone has made a good effort, and a good reference list at the end supports that impression. Anything more is just make-work to this reader. ] 04:50, 20 August 2007 (UTC)
== Funny fonts and policy pages == | |||
Sorry if this is too trivial or has been discussed before, but do we have a style guideline for how to write policy and guideline pages? | |||
I'm thinking, when to use bold or italics, standard table and bullet point styles, how to distinguish the effective part of the policy from (i) justifications of the policy, (ii) explanations of how to comply, (iii) citations or quotes from other policy pages, and (iv) examples. For example, ] is pretty long and messy, and looks like it could use better headings and organization, and perhaps breaking the "law" section out to an essay or separate page. ] has it all, bold, italics, two tables, and (<span style="color:green">green text!</span>)
Maybe some more serious requirements, like when to enumerate things versus when to use a category, passive voice versus active or imperative, conditional versus certain. The notion that policy pages are for broad universal rules, guideline pages for how those rules apply to different situations (or is that really the case?). To make up an example, "Spam should not be placed in the main space, talk pages, user pages, or anywhere else on Misplaced Pages" should read "Spam is not allowed on any Misplaced Pages page." | |||
Is that sort of thing collected anywhere and should it be? ] 17:14, 17 August 2007 (UTC) | |||
:It does seem like a good idea to have a style guideline for how to write policy and guideline pages. In the meantime, a bit of searching from place to place in MOS, and elsewhere is what is required. Would be nice in one place. ] - ] 05:17, 18 August 2007 (UTC) | |||
*The short answers are that (1) we have too much policy to begin with, and (2) {{tl|sofixit}}. ] 13:20, 20 August 2007 (UTC) | |||
**Thanks for reminding me...yes, too much policy. But perhaps a little meta-policy in guideline form could be an antidote to policy glut? I'm working on a draft proposal but it's not ready to show yet. ] 15:41, 20 August 2007 (UTC) | |||
== Article Removal == | |||
I have a question about article removal. A company that I work for has an article on Misplaced Pages that meets the notability guidelines for an article and is a relatively extensive article. I was recently asked, because I use[REDACTED] frequently, if the company were unhappy with the article, would it be able to have the article removed? If so, how could that be accomplished? ] 04:40, 18 August 2007 (UTC) | |||
:No, if the article met notability guidelines then it would not be removed. However, pressure from the company it's about could result in removal of all unreferenced statements from the article. —] <sup>(])</sup> 04:45, 18 August 2007 (UTC) | |||
:That would be difficult without a specific concern to address. — ] (] | ]) 04:46, 18 August 2007 (UTC) | |||
Thank you very much for your response. On a related issue, if the article in question were rated as GA quality, would removal of any unsourced statement by editors, without pressure from the company, be considered vandalism? Or, would this constitute keeping the article clean?] 04:54, 18 August 2007 (UTC) | |||
:Any unsourced statement believed to be untrue may be removed at any time by any editor. However, make sure to explain this in the ] or else it may be misinterpreted as vandalism. —] <sup>(])</sup> 04:59, 18 August 2007 (UTC) | |||
::Thank you again to both of you for your helpful and prompt responses. I think this will help me reassure my employer about the content of our article.] 05:01, 18 August 2007 (UTC) | |||
::*On the one hand, see ], which is a convenient way of reporting problems in articles about you or your company. On the other hand, see ], our guideline on conflict of interest (some companies are attempting to use Misplaced Pages for advertising, which is inappropriate). HTH! ] 13:19, 20 August 2007 (UTC) | |||
== Non-free album artwork in Song articles == | |||
Is it acceptable fair use of non-free images if album cover artwork is included in articles about songs on an album? An example is ] in ]. I would suggest that it is not fair use, as the image is not being used to illustrate the song itself. This has been discussed briefly ], but no real consensus was reached. I think this needs to be clarified as it affects a huge number of articles. Thanks ] 14:22, 18 August 2007 (UTC)
:I don't even find it appropriate in album articles unless the ''cover art itself'' is a notable part of the album, I'd certainly say the same for songs. ] <small><sup>]</sup></small> 21:34, 18 August 2007 (UTC) | |||
::I sure disagree. The cover art is essentially the only thing that can illustrate the album. I think it's fair to say that for songs, too, unless they had a single cover. Ask ]. ←] 22:08, 18 August 2007 (UTC) | |||
* I'm hard pressed to see a reason why we would care. It seems to be working fine. The musicians put art on the covers to attract attention to their products and their careers. Having this art at WP attracts attention to their music and careers. Everybody wins but the anal rule enforcers/creators. Please let sleeping dogs lie and turn up the tunes. --] 22:13, 18 August 2007 (UTC)
* Here is the text from the : | |||
::''Section 107 contains a list of the various purposes for which the reproduction of a particular work may be considered “fair,” such as criticism, comment, news reporting, teaching, scholarship, and research. Section 107 also sets out four factors to be considered in determining whether or not a particular use is fair: | |||
::''1. the purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes;'' | |||
::''2. the nature of the copyrighted work;''
::''3. amount and substantiality of the portion used in relation to the copyrighted work as a whole; and'' | |||
::''4. the effect of the use upon the potential market for or value of the copyrighted work''. | |||
::''The distinction between “fair use” and infringement may be unclear and not easily defined. There is no specific number of words, lines, or notes that may safely be taken without permission.'' | |||
:So even the Copyright Office says that this is not a bright line. In my view, using an album cover to illustrate a song satisfies all four points: (1) Misplaced Pages is nonprofit and for education purposes; (2) and (3) are both answered by the fact that the album cover is intended by the copyright owner for display to the public at no charge before the work is purchased, and that only the outer cover is used, not the entire package graphics and text. This would be different if the artwork were used to illustrate something unrelated to the music on the album; for example, an album by Madonna used to illustrate an article about "Nightclubs" would not be fair use. (4) is answered by the fact that album copyright owners actively encourage the display of their album artwork everywhere and that there is no way its display on Misplaced Pages would reduce the value of their product. This is in contrast to, for example, a bonus fold-out poster of the artist inside the CD package - that would be something of value intended only for purchasers, such that displaying it would reduce the value of the package.
:For those reasons, I believe displaying album cover artwork to illustrate the following topics is valid under fair use: The album; songs from the album; the recording artist; and the record company (if the record company is the copyright owner). | |||
:These are my personal opinions, having done some study of Intellectual Property issues. That said, I'm not an attorney, but I can't imagine a copyright owner of an album complaining about exposure for their music by display of the album cover. Examples are everywhere on the web: wherever there are reviews, there are album covers, and there are no cease and desist letters or lawsuits about those things; the record companies welcome it.
:I suggest that ] policy is unclear on this and needs to be improved. I'm sure that there are attorney editors who would be glad to help with interpretations and to clarify the language. --] ] 23:22, 18 August 2007 (UTC) | |||
:The question here is not fair use, directly. Misplaced Pages has policies that are more restrictive than the fair use limits, for two or three main reasons. One is to steer well clear of copyright liability rather than pushing any limits, keeping in mind that Misplaced Pages has a small legal budget, that it intends the articles and material in them to be re-used by people with very different purposes (including creating derivative works), and that these uses may be in different copyright jurisdictions. Another main reason is to limit the use of copyrighted content overall. And finally, Misplaced Pages hopes to encourage people to develop free use content, either by creating original material or finding public domain things. | |||
:Under ''Misplaced Pages's limitations'' -- not fair use necessarily -- an album cover to identify a single from the album is probably not a good use. This boils down to criterion #1 (replaceability) and #8 (significance) of the 10 non-free use criteria at ]. It does not uniquely identify the song. It's there mostly as a visual device, not a necessity. In fact, whereas most album articles do have the cover art for the album in the infobox, you'll find that most song articles here do not use that kind of picture and they do just fine. It's not a question of whether you can do it legally under fair use; it's whether you really need it. My hunch is, no.
:If you don't like the policy, this is as good a place as any to talk about it, but it's very entrenched and I do not see it changing soon. If you want to know what the policy ''is'' instead of what you think it ''should be'', ] and its often contentious talk pages are a good place to read up. Be sure to check through the archives. Hope that helps. ] 23:38, 18 August 2007 (UTC)
:: I'm not agitating for change about this, I was just offering my view based on the prior question. My personal opinion is that displaying album covers is much ado about nothing, because the copyright owners love it when their album covers are made visible. I've seen the pages you refer to, and I respect that others have other concerns about keep all uses free. I'll leave that debate to them. Thanks for your reply. --] ] 22:03, 19 August 2007 (UTC) | |||
:Using cover art in the album article makes sense, maybe even for singles. However, when the article is about a '''song''' rather than any particular single release of it, I don't see what business we have putting cover art into the article. Now granted, people tend to "work around" that by simply dedicating large portions of the article to the various single and cover releases of the song, but assuming the article is actually mostly about the song itself, I would say you need to carefully explain why an image is needed if you want to add one. Album covers can rarely be said to identify the song: a particular release of a song, yes, but rarely the song itself (over the years a song is usually included in any number of releases with all sorts of different cover art), and you would need more than a brief mention of a particular release in order to justify an identifying image of it in a song article. --] <span style="font-size:75%">]</span> 21:33, 22 August 2007 (UTC)
== Mobbing == | |||
Is there any rule against playing out the 3RR by simply calling up wiki-friends to help "revert the POV"? I mean when user:X (sorry if this exists) dislikes Y's edits, as usual calls them POV or original research, whatever - you all know this situation - then when a revert war erupts, and he's done with his 3 rvs, calls his mates to "finish the job" and play that "enemy" out, then of course reports Y as a 3RR offender. Is there any? Cos' I accidentally just found such a case... And in fact 4 others (!!) since... by checking the 3RR report page's content, reported (and reporter) users' actions, and reported pages' histories. --] 18:32, 18 August 2007 (UTC)
: Some call it a cabal, and it is a very effective way for a special interest group to hijack an article or manipulate WP policies. Sad but true. --] 18:35, 18 August 2007 (UTC)
:: See ] and other[REDACTED] policies. Meatpuppets are not to be tolerated any more than sockpuppets. In addition, using a "technicality" in an attempt to circumvent the spirit of[REDACTED] policies and guidelines is expressly a bad-faith move, and regardless of the methods used, as long as the result is disruption, the practice is to be stopped. The page in question should be protected, and the principals involved should be brought to ANI for further investigation and possible reprimand. This cannot be tolerated. --]|]|] 18:45, 18 August 2007 (UTC)
::] and ] hypothetically address the situation, but it is difficult to distinguish between legitimate reversions against improper edits by an edit-warrior (perfectly appropriate) and a cabal protecting a non-compliant page (problematic). The solution for someone opposing a cabal is to ], and to escalate quickly to RFC, which I've had success doing in getting some POV-pushing pages to come closer to Misplaced Pages policy. If you never revert, no one can accuse you of 3RR violations, though, of course, it also means that a group of POV-pushers will never be exposed for acting as such. ] 18:49, 18 August 2007 (UTC)
::* Can't agree that any of these policies are effective as they are commonly enforced. Actually I see canvassing as the most effective tool to break a cabal's hold on an article, by recruiting fair-minded editors to the fray. However, that can be a double-edged sword. --] 19:59, 18 August 2007 (UTC) | |||
:Yes, this is quite common. Also, teams of editors with the same goal often impose changes through proposals, which are not even supposed to be a vote, and insert inaccuracies and heavy POV into the article. This is a huge problem on Misplaced Pages when it comes to controversial topics (and because of human stupidity and unnecessary rivalry more and more topics are becoming controversial) and not much is done about it for the sake of "consensus" (which is faked) and "]".<br>Here is a nice quote:
{{cquote|Misplaced Pages is too often like the wild west, where the ability to shout the loudest, swing the hardest, and outlast the other fellow counts more than the quality and depth of one's sources.<small>— ]</small>|}}
:--] 19:54, 18 August 2007 (UTC) | |||
Thanks for the answers. If I understand it right, there's some platform where a suspected "cabal" (I'd rather say: "a group of people with common interests and/or ideas about a certain thing", even if it's a bit longer :) ) can be investigated. Misplaced Pages is too big, and there is more than one creationist user, and more than one evolutionist. And if they come together... :) But in the real world, it could happen that, for some reason, the creationists became the overwhelming majority, so all the evolutionists were banned for POV pushing, and all the related articles were overwritten from a creationist POV. This is what I see in other topics: they are overwritten from a certain POV, and what I'd call the ''balance'' is banned or retired because of them. (please, stick to my problem, don't start some religious war).
However I had a second question about recruiting people for revert warring. Is it forbidden, or not? --] 22:27, 18 August 2007 (UTC) | |||
* As I see it you can advertise the issue at a common area such as the Well, or you can directly contact other editors who have worked on the specific article or a closely related article. Or you could advertise at the talk pages of closely related articles, and at the project page if the topic is within a project or related to a project.--] 23:07, 18 August 2007 (UTC) | |||
This is a problem that I see many people face, just as I do. See the section below for a proposal to fix it.--] 09:41, 19 August 2007 (UTC)
*The answer to this lies in page protection. ] 13:17, 20 August 2007 (UTC) | |||
I've got the same sort of Question. | |||
I added a link to the Queenstown (NZ) article that was critical of the amount of development: http://www.boston.com/travel/articles/2004/11/07/new_zealand_at_a_crossroads?pg=full
It was unceremoniously deleted. The article itself is a brochure for Queenstown; there are so many people with a vested interest: hotels, property developers, tour companies, etc. It's almost impossible to say a bad thing, but Misplaced Pages isn't a glossy brochure to sell bacon-slice apartments, is it? ] 08:40, 21 August 2007 (UTC)
== Two types of stubs == | |||
There are currently two types of stubs, those which are assessed as stubs, and those which deserve stub templates. The first is based on content, the second on size. This is an odd double-standard. I propose that we treat these as an '''either/or''' situation. There is no reason to limit the stub templates to size when the content is what needs expanding. At the least, Misplaced Pages should decide on one type of '''Stub''' so that the word will mean the same thing no matter where you see it. ~ ] 17:04, 19 August 2007 (UTC) | |||
*What's a ] isn't based strictly on size, though there are certainly size-related clauses in the (somewhat open-ended and discursive) "definition". What's a "stub-class article" is... left entirely to the imagination, as far as I know. When these first started appearing, the distinction was justified by at least one WP1.0er on the basis that they "weren't necessarily the same", without a clear-cut definition or distinction being advanced. I suspect most people ''treat'' them as being the same, and the huge number of "automatic assessments" obviously assume that "stub" implies "stub class article" (whether or not the reverse is also true). Personally, I'd entirely abolish "stub-class article" categories, on the basis of being unnecessary, confusing, and creating just this sort of definitional headache. (i.e. essentially merge the "stub" and "start" classes, with distinction between them being left to whether or not there's also a stub template.) But I strongly suspect I'm on a loser on that one. People seem to ''like'' having tremendously fine-grained "assessment categories" -- despite the original rationale for these (inclusion in or exclusion from WP1.0) necessarily having a distinctly boolean character. Failing that, we should probably do as Johnny suggests, and treat the two as being same, and enjoin people to "please make them consistent, one way or the other!". (Though I'm still dubious that's a job for a bot, since if the two are currently inconsistent, there's no way in principle to know which to resolve it in favour of. It'd be possible to do this in a db-query-assisted semi-automated way, though.) ] 18:41, 19 August 2007 (UTC) | |||
*There's been quite a lot of comment in the past at ] about this problem. Having the assessment-style templates called "Stub-Class" was a silly mistake from the beginning, since the stub system had already been in place for a considerable time, and there was bound to be confusion resulting from it. Alai's suggestion of amalgamating the terms Stub-Class and Start-Class into a single Start-Class would get around this, or alternatively simply rename Stub-Class to something less confusing. This would not overly affect the assessment system, and would make it easier for WP:WSS, which is often faced with comments from editors confused about the two systems. There are good reasons for the need of two different types of assessment of stubs, though, so I'm less in favour of Johnny's suggestion of making them identical. The Stub system assesses articles in general for expansion by all editors, whereas the Stub-Class assessment is dedicated to individual wikiprojects; as such, it is likely that there'd be a more rigorous assignment of exactly what constitutes a Stub-Class article. This would create a systemic bias, in that articles connected with specific WikiProject subjects would have a different assessment criterion from those with no dedicated WikiProject. So, overall, either renaming Stub-Class to something less confusing, or combining Stub-Class and Start-Class, would be my ideal preference. ]...''<small><font color="#008822">]</font></small>'' 00:31, 20 August 2007 (UTC)
**I think that the difference between stub-class and start-class is relevant and important. I would support renaming stub-class, but to what? "Seed-class"? Or, rename stub-class to start-class and rename start-class to something else indicating progress, but that would be a fair amount of upheaval. ](]) 00:46, 20 August 2007 (UTC) | |||
***I'm not saying there's no difference between "stub-class" (assuming that's something like a "stub", as current practice would strongly imply) and "start"; OTOH, it does seem likely that it doesn't map in any way to prospective inclusion or exclusion from WP1.0, so I don't follow why it's important to, or relevant to ''that'' (and to what else it might be, remains a mystery). I do think it's pointless to differentiate between them ''twice'', as at present, with the consistency issues that introduces. | |||
***The categories are template-populated, so if the Stub-Class Articles were to be renamed (which would seem a rather half-hearted measure, if it fails to clear up the alleged distinction between those and stubs per se) it wouldn't be a ludicrous number of total edits, and it'd be reasonably automatable. ] 03:46, 20 August 2007 (UTC) | |||
****What about this - keeping '''Stub''' class, but as synonymous with those under ]. <s>This would be purely based on size, and these exceptionally short articles should be grouped together.</s> Then change current '''Stub''' to '''Start''', and current '''Start''' to something that reflects the fact that it is the foundation of a good article. Perhaps '''Basic'''-class, or the (slightly lengthy) '''Foundation-class'''? I'm okay with upheaval if we can settle a long-standing point of confusion. ~ ] 03:56, 20 August 2007 (UTC) | |||
*****At the risk of reiteration: ]s are ''not'' defined purely by size. Nor does ] advance any definition of its own, other than that in that guideline. ] 04:16, 20 August 2007 (UTC) | |||
******"My bad", as it were. Striking that part, how does the rest read? ~ ] 05:21, 25 August 2007 (UTC) | |||
== Deletion and merging violate the GFDL in some cases == | |||
As was brought up during the BJAODN discussion, deletion of pages occasionally violates the ], in particular section 4.I. | |||
Consider the following scenario: | |||
1. Article X is created. | |||
2. Content from article X is merged into article Y, with proper attribution as described by ] in order to comply with 4.I.
3. Article X is deleted. | |||
4. Article Y now violates the GFDL, since the history required by section 4.I. is no longer accessible. | |||
Since GFDL compliance is a Foundation issue, don't deletion, merging, or both need to be changed to bring Misplaced Pages back into GFDL compliance? ] 20:16, 19 August 2007 (UTC) | |||
:As a minor point: the full history isn't used on http://static.wikipedia.org, just the list of contributors. | |||
:To address your real question: There is an explanation for administrators ] about how to deal with a "merge and delete" RfD outcome, but it's not as well known as it should be. It would be better practice, however, if a list of contributors was copied to the destination article whenever a merge is carried out. — Carl <small>(] · ])</small> 20:36, 19 August 2007 (UTC) | |||
::Right, but there's not much an administrator can do if the result of the discussion is delete (not merge), but an editor had independently merged content in the past. | |||
::Would it be possible to add, in addition to history, a "contributor history" section to every article, containing only the information required by the GFDL (e.g. title, contributors, etc.), which is permanent and untouched by admin deletion? This solution seems clean in that it does not rely on extra effort on the part of the administrator or the editor merging, above what is required of them now. ] 21:07, 19 August 2007 (UTC)
:::Yes, that would be technically possible, but the history needs to be with the merged content, not in the original article. And that means we need to educate people about how to merge content correctly (by copying a contributors list to the talk page at the same time). — Carl <small>(] · ])</small> 00:52, 20 August 2007 (UTC) | |||
::::How would one do that? I don't think ] says anything about copying contributor lists; is there a way to extract the contributors & dates from a page's history? ] 01:43, 20 August 2007 (UTC) | |||
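(Technical aside, not policy: the MediaWiki API can return the user and timestamp of every revision of a page, so building the contributor list that a merge should preserve can be scripted rather than copied by hand. Below is a rough sketch; the `api.php` endpoint and query parameters follow the MediaWiki revisions module as I understand it, so treat them as assumptions rather than a tested recipe.)

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def unique_contributors(revisions):
    """Collapse revision dicts (each with 'user' and 'timestamp') into a
    de-duplicated list of (user, earliest_timestamp) pairs, ordered by
    each contributor's first edit."""
    seen = {}
    for rev in revisions:
        user = rev.get("user", "(unknown)")
        ts = rev.get("timestamp", "")
        # ISO 8601 timestamps compare correctly as strings
        if user not in seen or ts < seen[user]:
            seen[user] = ts
    return sorted(seen.items(), key=lambda kv: kv[1])

def fetch_revisions(title, api="https://en.wikipedia.org/w/api.php"):
    """Fetch revision metadata for one page. Network call; endpoint and
    parameter names are assumptions based on the MediaWiki API."""
    params = urlencode({
        "action": "query",
        "prop": "revisions",
        "rvprop": "user|timestamp",
        "rvlimit": "500",
        "titles": title,
        "format": "json",
    })
    with urlopen(api + "?" + params) as resp:
        data = json.load(resp)
    page = next(iter(data["query"]["pages"].values()))
    return page.get("revisions", [])
```

`unique_contributors(fetch_revisions("Some article"))` would then yield a list suitable for pasting onto the destination talk page before the source page is deleted; `unique_contributors` itself is pure, so it also works on revision data exported by any other means.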
:::::That's a possibility, but it's less complicated to do one of the following: | |||
:::::#Don't merge and delete; merge and redirect, and mention in the article history of the main article "Merged from XXX". As far as I'm aware, this is common practice, or should be. That way, the history section of the merged article is still accessible, as it should be. | |||
:::::#If the merged article should absolutely be deleted, then do a history merge. (See ] for more information about history merges.) | |||
:::::]<sup>]</sup> <span title="Misplaced Pages:Village pump (policy)">§</span> 03:30, 20 August 2007 (UTC) | |||
::::::But that's again if the entire page was merged, right? What if one paragraph gets moved, then one month later the original page is AfDed and deleted, with nobody remembering that a paragraph had been moved? The article into which the paragraph was moved will still have the "merged from article X" note in the edit history, but since article X is now deleted that's not enough to satisfy the GFDL. ] 04:19, 20 August 2007 (UTC)
:::::::In that case, the best course of action would be to undelete article X and make it into a redirect. If article X was merged into more than one article, the best solution I can think of is to make a trivial (but non-null) edit listing non-IP-based contributors to article X. Or lament that there weren't better free-content licenses when Misplaced Pages was started... ]<sup>]</sup> <span title="Misplaced Pages:Village pump (policy)">§</span> 04:38, 20 August 2007 (UTC) | |||
:I saw this too, and it has me worried. But my first thought is: This assumes that each page on Misplaced Pages is a separate "document" for purposes of the GFDL. I had always assumed that Misplaced Pages itself was the "document". The whole website, the database behind it, the project as a whole. If Misplaced Pages was a book, then each page would be, well, a page in that book. Is there anything anywhere that says otherwise? —<small>] (]|])</small> 17:11, 23 August 2007 (UTC) | |||
== New York City Subway station naming convention proposal == | |||
Over at ], we have been trying to reach a consensus on how ] stations should be named, because the subway system uses various names and punctuation formats for its stations, and users have "move-warred" articles in the absence of an agreed-upon guideline. The proposed convention is at ]. '']''<sup>]</sup> 08:22, 20 August 2007 (UTC) | |||
== Looking for a specific policy/notation == | |||
I posted my question ] but I'm not sure that was the right place. ] suggested I ask here. I fully understand the ''reasons'', but I'd like to know if there's a specific page/guideline/policy that covers users adding photobucket.com links to articles (most often to the "external links" section, linking to pictures at photobucket of the subject). I know that the uploading, or using of those images is covered under copyvio policy, and I understand the theoretical reason for not allowing them to be in links, but I'd like to be able to cite a specific policy, if asked. Does one exist? Thanks in advance! <sup>]<font color="FF69B4">♥</font>]</sup> 11:16, 20 August 2007 (UTC) | |||
:I don't know if there is actual policy about it, but there are good reasons why not to do it. | |||
# If the picture is relevant to the article, why is it not in the article?
# If the picture is not relevant to the article, why should it be an external link? | |||
# If the picture is relevant to the article, but would violate policy being in the article itself, it would also violate policy if it is hosted on photobucket. | |||
:Remember this is Misplaced Pages, and there doesn't have to be a policy for everything someone does. He can just do it because he thinks it will make Misplaced Pages better. And if it doesn't then someone will come along and fix it. ] 14:18, 20 August 2007 (UTC)
::Thanks Martijn, and yeah, I've said all those things, I just wondered if there was a specific policy I'd missed. This isn't in regards to any specific situation, just more my boundless curiosity and anticipating a need to explain it in the future. Thanks for the tips! <sup>]<font color="FF69B4">♥</font>]</sup> 14:46, 20 August 2007 (UTC) | |||
:Not all external links to photobucket.com are prohibited. There is no reason for a policy, guideline, or manual of style to specifically say you should not link to "photobucket.com", since each such external link is handled situation by situation. As an aside, linking to photobucket.com is not use of an image from photobucket on Misplaced Pages as posted . -- <font face="Kristen ITC">''']''' <sup>''(])''</sup></font> 14:51, 20 August 2007 (UTC)
::Jreferee, could you please clarify this: ''"As an aside, linking to photobucket.com is not use of an image from photobucket on Misplaced Pages as posted here"'' | |||
::Are you saying that was '''not''' a violation for the user to link to a photobucket.com image? The person's edit was <nowiki>http://i175.photobucket.com/albums/w156/kb8207/osceolahs.jpg</nowiki>; clearly (although they tried to do it with HTML) the intent was to ''add a photobucket image to the article''. Even if it is not the actual image appearing in the article, are you saying that the link itself is allowed? As Marty wrote above, "If the picture is relevant to the article, but would violate policy being in the article itself, it would also violate policy if it is hosted on photobucket." - so that makes me think that linking to an image such as the above issue would be the same thing?
::Perhaps a word is missing there that's making me not understand you, but I'm a bit more confused now, because it seems as though you're telling me that it is just fine to create links to photobucket? Also, you say that the MoS specifically mentions Photobucket? I was unable to find this, so if you can point to that I'd really appreciate it! Thanks! <sup>]<font color="FF69B4">♥</font>]</sup> 06:58, 21 August 2007 (UTC) | |||
*As stated above, this is a matter of common sense. Since our policies/guidelines are descriptive, and to my knowledge this hasn't been a big issue in the past, nobody has bothered to codify it so far. But if that user is asking "is this against policy", he's asking the wrong question - the proper question is "does this improve the encyclopedia". As Martijn said, if the image is appropriate, we should host it; if it's not, it shouldn't be in the article anyway. ] 08:51, 21 August 2007 (UTC) | |||
== Conflict of Interest due to Wikia, Inc. == | |||
*'''Note:''' All discussion has been moved here. ] ] ] 17:44, 20 August 2007 (UTC)
There is a discussion at WP:COI/N (Conflict of Interest Noticeboard). One admin has consented to keeping it in the open there. Two non-admin users have attempted to hide it from general view. I assume that it is fair for me to revert the attempts to hide the material, at least until an administrator is the one who hides it. --] 16:39, 20 August 2007 (UTC)
I heard that the person who is in charge of the Wikimedia Foundation's finances is the very same person who is in charge of the for-profit Wikia, Inc.'s finances. Is that true? --] 03:56, 19 August 2007 (UTC) | |||
:Good question. <font face="Verdana">]</font><sup>'']''</sup> 20:35, 19 August 2007 (UTC) | |||
::So what if it is? I certainly trust them to do a good job if they are, and I'm sure that the board (who is in charge of the person) knows about this considering the owners of Wikia are previous board members. (...and the Board isn't stupid). ''']''' '''<small>]</small>''' 23:01, 19 August 2007 (UTC) | |||
:::That's fine if you personally trust them, Cbrown1023, but you may want to look at the (no joke -- it's the same form number as the number found in your User name -- coincidence or irony?), especially what's said about Line 5a: '''A "conflict of interest" arises when a person in a position of authority over an organization, such as a director, officer, or manager, may benefit personally from a decision he or she could make.''' Note also Appendix A, starting at Page 25, which outlines a sample Conflict of Interest policy that a non-profit organization might adopt. Do you think that, as Appendix A suggests, either Jimmy Wales or Michael E. Davis have ever left the room during a Wikimedia Foundation board meeting, so that the other board members could discuss whether a conflict of interest was present for those two, who just happen to be former business partners and are currently vested in Wikia, which benefits from many, many favorable associations within Misplaced Pages? Jimmy Wales tried to hire a Misplaced Pages Arbitration Committee member onto Wikia. Wikia has many thousands of outbound links from Misplaced Pages, which point to pages monetized by Google AdSense ads. I guess, Cbrown1023, the question is not whether the Board "knows about this", but rather, why are they allowing such a gross appearance of conflict of interest to continue unabated? --] 03:27, 20 August 2007 (UTC) | |||
::::If you feel that the Wikimedia Foundation is doing something wrong, by all means file a complaint with them. Otherwise, please take this discussion elsewhere. This noticeboard isn't for solving legal problems. - ] <sup>]</sup> 03:43, 20 August 2007 (UTC) | |||
:::::This is not currently a legal problem. Nobody said it was. It is a Conflict of Interest problem. Another administrator has called it a "Good question", so why should it be swept under the rug and be "Resolved" by a non-administrator? --] 14:23, 20 August 2007 (UTC) | |||
Hi again Dude. A few clarifications: you posted to ask whether there's a conflict of interest but haven't supplied much information. Normally requests to this board cite specific activity and evidence. And normally there's an onsite edit history to reference. If this person actually has registered and edits in a way that reflects a conflict of interest, this noticeboard might be able to accomplish something. If the conflict of interest relationship doesn't extend to actual editing activity then I have no direct power and only a little influence. Yet as the founder of ] I'm particularly open to this type of request. Sure, why not investigate a Misplaced Pages/Wikia COI? Burden of evidence rests squarely on your shoulders. Go for it if it's particularly important to you. Just expect to shoulder most of the work yourself. I'll check it out, see if there's anything I can do about it, and possibly ask for broader input. That's as fair as I can be. <font face="Verdana">]</font><sup>'']''</sup> 15:19, 20 August 2007 (UTC) | |||
:Well, this is a wiki, so the burden of evidence isn't just on me -- it's on the other users who will hopefully see this thread and have enough "wikisleuthing" in their blood to check it out some more. I appreciate your support of it staying in the open, rather than being hastily "resolved", which really would have reflected poorly on the Foundation. For starters, people may wish to look at these discussions about the Wikia/Wikipedia conflict of interest: | |||
::* | |||
::* | |||
::* | |||
::* This one is important, as it shows that Davis has not paid $817,830 that he was judged to owe the plaintiff. We are simultaneously being asked to "trust" that Davis will do a good job with the books at both Wikimedia and Wikia, Inc..<small>—The preceding ] comment was added by ] (] • ]){{#if:16:00, August 20, 2007 (UTC)| 16:00, August 20, 2007 (UTC)}}.</small><!-- Template:Unsigned --> <!--Autosigned by SineBot--> | |||
::*<small>—The preceding ] comment was added by ] (] • ]){{#if:16:00, August 20, 2007 (UTC)| 16:00, August 20, 2007 (UTC)}}.</small><!-- Template:Unsigned --> <!--Autosigned by SineBot--> | |||
::* <small>—The preceding ] comment was added by ] (] • ]){{#if:17:07, August 20, 2007 (UTC)| 17:07, August 20, 2007 (UTC)}}.</small><!-- Template:Unsigned --> <!--Autosigned by SineBot--> | |||
:Again, I look forward to whether anyone else will step up and investigate this further. --] 15:48, 20 August 2007 (UTC) | |||
::(stepping over issues of whether this is the right page to talk about the subject)...indeed, board members and accountants both have ] duties to act in the best interest of their organizations. By various laws and governance principles they have to recuse themselves or avoid involvement when there is a conflict. Even a perceived conflict can be corrosive to governance and is sometimes prohibited because people lose faith. Someone who is on the board of Wikimedia or does its finances and also has a financial stake in Wikia should be very careful about taking positions here on things that benefit Wikia by directing traffic there, banning things from Misplaced Pages so as to distinguish it from a commercial site, making Misplaced Pages less attractive to constituents than Wikia. Actions that seem to raise a conflict include banning commercial links, advertisements, fair use media, conflict-of-interest editors, etc., from Misplaced Pages so that people go to Wikia for that.] 16:05, 20 August 2007 (UTC) | |||
(outdent) Looking over those five links, two of them are specifically legal issues outside my expertise. I have no qualification to evaluate them. Joe Szlilagyi's blog is hardly a reliable source and another on-wikipedia thread was started by someone who's expended his credibility also. The techcrunch.com article holds water, in my opinion. What exactly are you seeking? If the basic complaint regards financial relationships at that level, then the most I could do would be to ask the WMF board to review this matter, and possibly to ask someone to institute nofollow to outgoing links to Wikia. My sysop tools would be useless to address this. Or is more forthcoming? <font face="Verdana">]</font><sup>'']''</sup> 17:06, 20 August 2007 (UTC) | |||
:This is a wiki -- there's no telling if there is "more forthcoming" or not. Another example might be the Essjay situation. Essjay was nominated by Jimmy Wales to the Arbitration Committee -- the highest level of dispute resolution below the Board itself. Only a month earlier (I may be wrong about the timeline), Wales had also hired Essjay to work for Wikia, Inc. This took place this year, well after the issue of "Conflict of Interest" has been made so noticeable on Misplaced Pages, thanks in part (ironically) to Wales' discussions of editing by conflicted parties. Was it appropriate for Wales to nominate one of his Wikia employees to a position on the Arbitration Committee? I believe that question was obscured by the whole firestorm over Essjay's fabricated credentials. Yes, I think the Board of Directors should look at this entire matter; but do you realize that it should be while Wales and Davis and Beesley (and any other Wikia parties I may have missed) are not present in the room? The other factor that I think is important here is that this discussion remain open for some time. Already two non-admin users have attempted to hide it from plain view, with the reason being it belongs somewhere else. This seems very weak, being that this is a Conflict of Interest Noticeboard, and this is a conflict of interest issue. --] 17:15, 20 August 2007 (UTC) | |||
(outdent) To clarify for newcomers to this thread, we've agreed to refer discussion here from the other locations because this looks like the kind of issue best addressed by community input and (possibly) petition to the WMF board. <font face="Verdana">]</font><sup>'']''</sup> 17:31, 20 August 2007 (UTC) | |||
::I only have a few comments on all of this... first, I agree that the issue should not have been posted on the COI noticeboard... that is for EDITING with a COI, not conflicts of interest that have nothing to do with articles or editing them. Second, I am not sure what all this hoopla is about, and frankly I don't care. If there is an improper COI at the executive level, I am sure that Jimbo's attorneys will notify Jimbo of it and suggest a change. It does not affect our project of building an encyclopedia, so why should we care? ] 19:50, 20 August 2007 (UTC)
:::I wish even ten percent of the people who offer opinions about how ] ought to run actually pitched in to help run it. <font face="Verdana">]</font><sup>'']''</sup> 20:20, 20 August 2007 (UTC) | |||
::::A conflict of interest on the board of a nonprofit does potentially affect the nonprofit's projects. Jimbo's pronouncements have a quasi-policy effect here, and the board does vote on resolutions that affect what the encyclopedia looks like, how content is licensed and distributed, and how we go about our business generally. If a board member were to say "We do not X on Misplaced Pages, that is for other Wikis" (implying, Wikis where I might make some money from it) I can understand why people would be concerned. Without saying there is or is not a problem, it's certainly the prerogative of the stakeholders to discuss management issues, and a worthy subject of discussion. ] 21:31, 20 August 2007 (UTC) | |||
:::::But, again, what's the resolution? Misplaced Pages policy is that Jimmy Wales gets to override all the other policies at his whim, so there's always the hypothetical possibility of Wales running Misplaced Pages for his self-interest, and that's unavoidable unless you want to mirror the site and start over somewhere else and hope people follow you to the new site. In the absence of an actual policy proposal by Wikia that presents an actual conflict of interest adversely affecting the encyclopedia, this is all hypothetical. One could argue that the ] policy, which deletes not just libelous material, but all controversial material even if true, presents a conflict of interest, because it values Misplaced Pages assets threatened by lawsuit over the judgment of individual editors about how best to produce an encyclopedia by creating ironclad rules. That's not an argument against BLP, by the way, just against the extreme concerns about conflicts of interest presented here. ] 21:42, 20 August 2007 (UTC) | |||
::::::Fiduciary duties are a serious matter. Overriding the will of individual editors for the benefit of the project as a whole is one thing; not saying this is happening but overriding the editors as a group in favor of a board member's private interest is quite another. One step people could take, and the Board should certainly take, is to subject Jimbo's proclamations to more scrutiny and not adopt them all as a matter of course. If that means changing policy, policy can be changed. We have that power. We don't need to wait for a new, or actual, or proven, conflict to arise before considering the matter. As a technical matter, Wikimedia is ''not'' a membership organization so the actual relation between editors, bureaucrats, administrators, the Foundation, and the public is rather complex. Practically, I doubt anyone is going to do anything unless there's a melt-down of some sort. But nothing wrong with discussing. For an interesting parallel (but a very different organization and context) it's interesting to look at the relationship between ] (a for-profit that runs the website) and the Craigslist Foundation (a nonprofit that gives away all the profits). They had to separate over conflict of interest issues, but Craig is still on the Board of both. ] 23:04, 20 August 2007 (UTC) | |||
:::::::The community has overruled Jimbo on occasion and if a sufficient number of community members raised this issue with the board it would probably have an effect. <font face="Verdana">]</font><sup>'']''</sup> 23:48, 20 August 2007 (UTC) | |||
(unindent) I am sort of confused. Yes, Wikia and Misplaced Pages share a number of people. Yes, there are some aspects of a cozy relationship. That is public information.
If the accusation is that there's a potential COI, then yes, but everyone's aware of it, from the Board to individual admins and editors who bother to pay attention. It's possible we'd all miss some sort of actual conflict or improper behavior, but I haven't seen any. | |||
If you're suggesting such is going on, then please provide us some more specific proof. | |||
If you're worried about it, ask board members if they can let you know what they're doing to review potential conflicts of interest. ] 19:20, 21 August 2007 (UTC) | |||
:Georgewilliamherbert, did you see when Jimmy Wales used Misplaced Pages as a talent pool to hire an admin named Essjay onto the Wikia, Inc. staff? Then about a month later Wales appointed the same Essjay to the Arbitration Committee on Misplaced Pages. If the Board was aware of COI, shouldn't Wales be working on '''reducing''' the number of Wikia staff members who infiltrate the highest positions of authority on Misplaced Pages, rather than '''increasing''' the count by one more person? Also, did you notice when Jimmy Wales overruled community consensus and decided that "nofollow" tags should be added to all outgoing links -- but that many of the inter-wiki links to Wikia, Inc. sites were not subject to this decree? Those are actual conflicts or improper behavior. Aren't they? --] 00:09, 23 August 2007 (UTC) | |||
== Banning policy on proxying--banned users can censor? == | |||
Currently the ] states: | |||
:''Wikipedians are not permitted to post or edit material at the direction of a banned user, an activity sometimes called "proxying."'' | |||
If a banned user asks to have something included, does that really mean that all of a sudden everyone is forbidden from including it? If that were the case, a banned user could effectively censor just by asking to have the material he or she wishes to censor included. I'm sure that cannot be the intent of the policy. Can we rephrase this so that it doesn't allow banned users to censor new material?
Please respond at ]. Thank you. ←] 17:51, 20 August 2007 (UTC) | |||
:I don't see a problem in the wording, and no it doesn't mean that. If an editor decides in good faith to follow a banned user's suggestions that are made openly on a talk page, then he is not "at the direction" of that user and can do as he chooses. ] 18:41, 20 August 2007 (UTC) | |||
::(edit conflicted with the above) I don't see that as a problem. It just means we don't post on behalf of the banned editor. If the proposed edit has merit then someone else will probably make a similar edit completely independently, which is fine. <font face="Verdana">]</font><sup>'']''</sup> 18:43, 20 August 2007 (UTC) | |||
How are administrators expected to discern between independent and directed inclusion? How is someone who has decided to include material which a banned user suggested supposed to defend themselves from accusations of proxying? Wouldn't anyone proxying likely claim that they are acting independently? ←] 18:50, 20 August 2007 (UTC) | |||
:Good questions. I think that if I was going to add something that a banned user suggested, I'd say so on the talk page and explain my reasons. It would be up to others to AGF. I wouldn't think you could formalize a procedure for this, but you also can't allow concepts to be censored just because a banned user proposes them. Admins will have to be flexible, as usual. ] 19:04, 20 August 2007 (UTC) | |||
::Yes, provided that the editor adding the content is able to confirm that they believe it valid (per WP:V etc, of course) the ] dictates that we believe the editor concerned. All we require is the belief that the content is not being added purely on the basis that it has been promoted by a banned user. ] 19:53, 20 August 2007 (UTC) | |||
:::Directed inclusions are usually pretty obvious because they reproduce the same problems that led to the editor's ban. They're mostly cut-and-paste jobs. If anyone really agreed with these people and cared enough, they'd research independently and put citations and statements into their own words, which would be fine. <font face="Verdana">]</font><sup>'']''</sup> 20:19, 20 August 2007 (UTC) | |||
::::The only premise I can think of is if the banned editor was blocked regarding their conduct (or similar), rather than contributions. If, owing to overzealous interpretation of the rules, otherwise good edits were removed simply because they were the contributions of a banned user, then it may be permissible for someone to reintroduce them - citing that the edits had consensus for inclusion prior to the ban of the editor concerned. For this the question of whether it is being done at the behest of the banned editor is irrelevant; the edits are under a different name and therefore the banned editor is not credited. In reality, good edits will always return (without prompting) since the good sources remain. Bad edits will not survive (despite prompting). ] 20:46, 20 August 2007 (UTC)
I have proposed changing that sentence to: | |||
:''Wikipedians are not permitted to post or edit material at the direction of a banned user, an activity sometimes called "proxying," unless they are able to confirm that the changes are verifiable and have independent reasons for making them.'' | |||
Is that better? ←] 19:14, 21 August 2007 (UTC) | |||
:No clarification needed. "At the direction of a banned user" is clear enough; it doesn't include making changes that a banned editor wants if the acting editor is doing it independently. ]]<sup>]</sup> 15:25, 22 August 2007 (UTC) | |||
== what a source is?! == | |||
good example :<br />i put a trivia on ] article, the movie, and some guy, ], undid it : cause : "unsourced" - - - sometimes, when i'm on wiki, i'm really asking myself about intelligence, in general - - -<br />trivia text : "<i>The actress Kate Nelligan, who plays Tom and Savannah's mother, Lila, was born on March 16th, 1950, and was older than her twins in the movie, Tom (played by Nick Nolte, born on February 8th, 1941) and Savannah (Melinda Dillon, October 13th, 1939)</i>" : what more do we need, more than the birthdates?! ri-di-cu-lous, huh! ] 07:18, 21 August 2007 (UTC)
*The problem is that you're just posting your own observation. Even though it's trivial to do the math, it's trivial to do any number of comparisons--that's how people come up with all those wacky numerological coincidences. The question is 'why is this observation noteworthy?' The answer is to find a secondary source that has taken note of it, and its reliability is a sign of the amount of value to give this observation. If it was noted in a review, cite that, for example. ] (]/]) 07:38, 21 August 2007 (UTC) | |||
*Yes, the information makes sense, but you need to state where you found that information (i.e. its source). Did you read it somewhere? Did you see it on TV? Everything in a good encyclopaedia must be backed up by a reference. ] 07:49, 21 August 2007 (UTC)
**sometimes, too often, people on wiki are really... how to put it politically correctly?! sorry, there is no other word than ''**ity'' : who needs sources to read an official birthdate? AND you do not know what a numerological coincidence is!!! what i put is NOT a numerological coincidence but JUST the FACT that an actress was older than other actors playing her son+daughter in a movie, that's all ] 08:16, 21 August 2007 (UTC)
***But, ]. As an example, you don't need to know ] was parodied as ] to know that Pikachu is a yellow mouse-like fictional creature that can use electricity. It's loading a horse-drawn cart with the equivalent of a semi cab - useless. -<font color="008000">'']''</font> <small><sup>(<font color="0000FF">]</font> <font color="FF7F50">]</font>)</sup></small> 08:51, 21 August 2007 (UTC) | |||
***Consolidating discussion from my talk page... | |||
<blockquote> | |||
:::bravo, i never read something this *** about what a source is... in this case, the point is just to KNOW the birthdate: no one needs a source!!!! '''OR :''' you have to put in wiki all sources for all mentioned birthdates, good luck! ] 08:46, 21 August 2007 (UTC)
</blockquote> | |||
:::-] 08:58, 21 August 2007 (UTC) | |||
::*Actually, yes all dates must be sourced on Misplaced Pages. This is an encyclopaedia and must contain ] information only. It is ]. Any unreferenced material will be removed for that reason. Are you saying it is OK for people to make dates up? ] 08:58, 21 August 2007 (UTC) | |||
This whole discussion is very silly. First, it should be clarified that 84.227.48.33 means that the actress playing the mother was nine years ''younger'' than the actor playing her son (and eleven years younger than the actress playing her daughter). This is certainly an interesting fact, and Night Gyr's question implies that he didn't fully understand what 84.227.48.33 was talking about.
Second, Papa November is mostly wrong. Birth dates are rarely referenced (though that doesn't mean they shouldn't be), should not be removed for being unreferenced (though checking them and referencing them would be a splendid way to improve the encyclopedia), and, in my opinion most importantly, his last question about making up dates implies that anything unreferenced is made up or that allowing anything without a reference behind it is supporting factually inaccurate information. This is an enormous assumption of bad faith and simply not a logical conclusion. ] 20:44, 21 August 2007 (UTC) | |||
:Yes, it's an interesting trivial fact. But is there a source that recognises this observation? Is there a source out there that says "oh look, the actress playing the mother was nine years younger than the actors playing her children!" ] 01:07, 22 August 2007 (UTC) | |||
::Such a source is unnecessary if there are sources for their birth dates. This is an obvious factual observation, like finding the population density of a region for which you have the population and the area. ] 05:18, 22 August 2007 (UTC) | |||
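To make the "obvious factual observation" point concrete, the comparison in dispute is this sort of routine arithmetic over the birth dates quoted earlier in the thread (a sketch; the dates are the ones given above, and the variable names are just illustrative):

```python
# Routine calculation from the birth dates quoted in the trivia text above.
from datetime import date

lila = date(1950, 3, 16)       # Kate Nelligan, who plays the mother
tom = date(1941, 2, 8)         # Nick Nolte, who plays her son
savannah = date(1939, 10, 13)  # Melinda Dillon, who plays her daughter

# Positive differences mean the "children" actors were born before the "mother".
print((lila.year - tom.year, lila.year - savannah.year))  # (9, 11)
```

This is the kind of derived fact that needs no separate source beyond the sourced dates themselves, though whether it is ''noteworthy'' is the separate editorial question raised above.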
:::yes, but to demonstrate that this is more than a minor factoid you require secondary sourcing. This is kinda like WP:N on a small scale and IMHO is a good thing. It prevents articles degenerating into trivia lists. '''<font color="red">]</font><font color="green">]</font><font color="blue">]</font><font color="orange">]</font>''' 14:57, 22 August 2007 (UTC) | |||
::::Even if this were backed up by a source, it doesn't have to be included if it's too unimportant or not relevant. Now, if this were a part of the ''critical reception'' of the film, that would be a different thing. ]]<sup>]</sup> 15:19, 22 August 2007 (UTC) | |||
::::Since the article has a trivia section, it's a little too late for that pressing concern. Even still, I would certainly suggest that this be included in a well-written cast section were the article more mature. ] 18:45, 22 August 2007 (UTC)
== OMG too many essays == | |||
As some of you may have heard before, our ] is a complete and total mess, because for years people have dumped anything they wanted to say in there. In an effort to clean this up, I would suggest the following, preferably using a bot: | |||
#Find all essays that lack sufficient outside participation, feedback, or incoming links | |||
#Since presumably few people care about these, move them into the userspace of their author | |||
#Remove all essays in userspace from this category (by removing {{tl|essay}}) | |||
Thoughts? ] 12:42, 21 August 2007 (UTC)
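The triage rule in steps 1-3 above could be sketched roughly as follows. This is only an illustration of the selection logic, not an actual bot: the `Essay` record, the thresholds, and the sample titles are all hypothetical, and a real bot would pull editor counts and incoming links from the wiki's API rather than hard-coded data.

```python
# Illustrative sketch of the proposed essay triage: userfy essays that
# lack outside participation, feedback, or incoming links.
from dataclasses import dataclass

@dataclass
class Essay:
    title: str
    editors: set          # usernames with non-trivial edits
    incoming_links: int   # links from other project pages
    age_days: int         # days since the essay was created

def should_userfy(essay, min_editors=2, min_links=1, min_age_days=30):
    """True if the essay looks like a single-author page with no
    outside interest, i.e. a candidate for moving to userspace."""
    if essay.age_days < min_age_days:
        return False      # give new essays a chance to grow
    if len(essay.editors) >= min_editors:
        return False      # multiple substantive contributors
    if essay.incoming_links >= min_links:
        return False      # other pages rely on it
    return True

essays = [
    Essay("Misplaced Pages:On civility", {"Alice", "Bob"}, 5, 400),
    Essay("Misplaced Pages:My pet theory", {"Carol"}, 0, 200),
    Essay("Misplaced Pages:Brand new idea", {"Dave"}, 0, 10),
]
to_userfy = [e.title for e in essays if should_userfy(e)]
print(to_userfy)  # ['Misplaced Pages:My pet theory']
```

The thresholds (one editor, zero incoming links, a month old) match the refinements suggested later in this thread, but any real run would want a human to review the list before moving anything.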
: It seems like a very good first step. Thanks! --] 13:16, 21 August 2007 (UTC) | |||
:May I ask what is the problem with essays in userspace? Many people (myself included) actually prefer writing essays in their "own" space, so that they're not ''that'' mercilessly edited and always reflect the author's intentions. I would strongly '''oppose''' a plain removal. An acceptable middle ground would be to create a {{tl|useressay}} template that categorizes pages to a '''sub'''category of ], so that they do not show up there, but are nevertheless accessible. ]] 14:38, 21 August 2007 (UTC)
::There is no problem with essays in userspace, and nobody is saying that there is. Note that most people do not in fact bother with "tags" in their userspace, so as a rough approximation there are ten times as many essays in userspace as we know of. I have no objection to {{tl|useressay}}. ] 14:47, 21 August 2007 (UTC)
:::I'm not convinced that segregating userspace categories is the solution. Suggest massive MFD of low quality essays (in whatever namespace) that either duplicate the content of better essays or whose primary contributor has been inactive for 3-4 months or more. Would prefer to slant this toward older essays which never gained many incoming links and don't get updated. <font face="Verdana">]</font><sup>'']''</sup> 14:52, 21 August 2007 (UTC) | |||
:::*I have no objection. Please pick any five from ] you like; nearly all of them are in fact low quality. However, I suspect that people will miss the point and go "keep, it's an essay". ] 14:56, 21 August 2007 (UTC) | |||
::::*Do we really need that? An essay is an essay; the distinction is already made at the "publicity level" they have, that is, essays that are "respected by the community" even get to be linked from policies and guidelines, while others only sit in the category. Currently we have two sources of possible revert wars: linked or not from a given policy/guideline, and staying in the Misplaced Pages: or user: namespace; doing this we'll get another source: is it an essay or not. Do we need to add yet another layer of policy-like categorization? (There's a somewhat related discussion at ].) - ] 15:11, 21 August 2007 (UTC)
:::::Here's one possible option; not earth-shaking, but available. You can use the category ], if you want, to draw further distinctions to identify some of those essays which should be used especially as guidelines or suggestions on editing Misplaced Pages entries. --] | |||
::::::I agree with the idea of moving personal essays back into userspace, but it shouldn't be done by bot: each deserves to be evaluated by a human -- using pre-determined criteria (e.g., no outside participation) but personal judgement as well. There are only about 500 essays in the Misplaced Pages namespace; this task can be done by hand.--] 17:06, 21 August 2007 (UTC) | |||
::::::Oh, and in the cases where users object to the move, go to RFC or Requested Moves to see which of the moves are endorsed.--] 17:15, 21 August 2007 (UTC) | |||
I support userfying the obviously userfiable essays and subcategorizing the category to make it easier to navigate. Deleting essays outright seems illogical, both because there's really nothing to gain over userfication, and it'd be a hassle to deal with the MfDs. ] 20:47, 21 August 2007 (UTC) | |||
*Okay, let's start this simple. I suggest that '''every essay that has only been edited by a single user''' (not counting typo fixes, adding the essay tag or a category, nominating it for deletion, or similar minor stuff) should be moved to that user's userspace. The only reason I suggested a bot is because it's a lot of work; if some people chime in to help, we can do it by hand. ] 08:22, 22 August 2007 (UTC)
:That sounds like a perfectly good start. I'd additionally exclude essays less than a month old, to give them a chance to grow. I'm not volunteering, though, I have salmon on the grill.--] 18:48, 22 August 2007 (UTC) | |||
:Sounds good to me. ] ] 19:04, 22 August 2007 (UTC) | |||
::'''Strong recommendation:''' I'd like to add one recommendation to this sensible proposal. To the extent that "human eyeballs" get involved, it would be great if people could identify essays that touch upon the same or similar issue, and flag those as merge candidates with each other (regardless of whether they get moved to User space). | |||
::This will help reduce redundancy, and inform essay authors that their ideas have been addressed elsewhere, possibly encouraging further collaboration and conservation of effort. ] 19:11, 22 August 2007 (UTC) | |||
::*Sounds reasonable. ] 07:42, 23 August 2007 (UTC) | |||
* I agree with Radiant's proposal, but suggest that we delete essays substantially the work of one person, from contributors who have not contributed to WP for at least 6 months, and only include in the contributing editor count those editors who have contributed to WP in the last 6 months. --] 19:41, 22 August 2007 (UTC) | |||
::Why is that necessary in contrast with just moving it to their user space?--] 20:35, 22 August 2007 (UTC) | |||
::*Kevin - to keep things simple, I suggest we move them to userspace as suggested above, and let people who wish them deleted invoke ] to do so. ] 07:42, 23 August 2007 (UTC) | |||
:::* I have no strong feeling on the matter other than to support a good plan which moves us closer to your goal. --] 14:50, 24 August 2007 (UTC) | |||
Perhaps someone should write an essay about this. ] seems to be available. - ] (]) 15:56, 24 August 2007 (UTC)
WP isn't a free webhost. If you're going to write non-collaborative documents that no one else cares about, please do it someplace else. I support MFDing low-interest essays, or upmerging them. It would be easy enough to make lists of such pages with the fewest editors and incoming links. --] 16:09, 24 August 2007 (UTC)
== We must have a policy for the company lists == | |||
I have been trying to discuss this for quite some time now at different places, without much success. But the issue still remains: the company list articles seem to be quite wild. Most are either useless or powerful spam magnets, some are way too long with more promising to become so, and all are growing without the slightest notion of guiding principles. For details please check the discussion ]. <font color="deeppink">]</font><sup>(] • ])</sup> 19:11, 21 August 2007 (UTC)
: I would bring this up at the talk page for ], which is the notability guideline for companies and organizations. I would support inclusion of your concept there. --] 19:38, 22 August 2007 (UTC)
:'''No''' -- whatever you think of LLMs, the reason they are so popular is that the people who use them earnestly believe they are useful. Claiming otherwise is divorced from reality. Even people who add hallucinated bullshit to articles are usually well-intentioned (if wrong). ] (]) 06:17, 2 January 2025 (UTC) | |||
== Weights & Measures == | |||
*'''Comment''' I have no opinion on this matter, however, note that we are currently dealing with a ] and there's a generalized state of confusion in how to address it. ] (]) 08:54, 2 January 2025 (UTC) | |||
*'''Yes''' I find it incredibly rude for someone to procedurally generate text and then expect others to engage with it as if they were actually saying something themselves. ] (]) 14:34, 2 January 2025 (UTC) | |||
* '''Yes, mention''' that use of an LLM should be disclosed and that failure to do so is like not telling someone you are taping the call. ] (]) 14:43, 2 January 2025 (UTC) | |||
*:I could support general advice that if you're using machine translation or an LLM to help you write your comments, it can be helpful to mention this in the message. The tone to take, though, should be "so people won't be mad at you if it screwed up the comment" instead of "because you're an immoral and possibly criminal person if you do this". ] (]) 07:57, 3 January 2025 (UTC) | |||
: '''No.''' When someone publishes something under their own name, they are incorporating it as their own statement. Plagiarism from an AI or elsewhere is irrelevant to whether they are engaging in good faith. ] (]) 17:29, 2 January 2025 (UTC) | |||
* '''Comment''' LLMs know a few tricks about logical fallacies and some general ways of arguing (rhetoric), but they are incredibly dumb at understanding the rules of Misplaced Pages. You can usually tell because the prose looks incredibly slick and professional, yet somehow it cannot get even the simplest points about the policies and guidelines of Misplaced Pages right. I would indef such users for lacking ]. ] (]) 17:39, 2 January 2025 (UTC)
*:That guideline states "Sanctions such as blocks and bans are always considered a ''last resort'' where all other avenues of correcting problems have been tried and have failed." ] (]) 19:44, 2 January 2025 (UTC) | |||
*:: ] isn't a guideline, but an essay. Relevantly though it is being cited at this very moment in ]. ] (]) 20:49, 2 January 2025 (UTC) | |||
*:::I blocked that user as NOTHERE a few minutes ago after seeing them (using ChatGPT) make suggestions for text to live pagespace while their previous bad behaviors were under discussion. AGF is not a suicide pact. ] (]) 20:56, 2 January 2025 (UTC) | |||
*:{{tq|... but somehow it cannot get even the simplest points about the policies and guidelines of Misplaced Pages|q=yes}}: That problem existed with some humans even prior to LLMs. —] (]) 02:53, 20 January 2025 (UTC) | |||
*'''No''' - Not a good or bad faith issue. ] (]) 21:02, 2 January 2025 (UTC) | |||
*'''Yes''' Using a 3rd-party service to contribute to Misplaced Pages on your behalf is clearly bad faith, analogous to paying someone to write your article. ] (]) 14:39, 3 January 2025 (UTC)
*:It's a stretch to say that a newbie writing a comment using AI is automatically acting in bad faith and not here to build an encyclopedia. ] (]) 16:55, 3 January 2025 (UTC)
*::That's true, but this and other comments here show that not a few editors perceive it as bad-faith, rude, etc. I take that as an indication that we should tell people to avoid doing this when they have enough CLUE to read WP:AGF and are ]. <span style="font-family:Garamond,Palatino,serif;font-size:115%;background:-webkit-linear-gradient(red,red,red,blue,blue,blue,blue);-webkit-background-clip:text;-webkit-text-fill-color:transparent">] ]</span> 23:06, 9 January 2025 (UTC) | |||
*'''Comment''' Large language model AI like Chat GPT are in their infancy. The culture hasn't finished its initial reaction to them yet. I suggest that any proposal made here have an automatic expiration/required rediscussion date two years after closing. ] (]) 22:42, 3 January 2025 (UTC) | |||
*'''No''' – It is a matter of how you use AI. I use Google translate to add trans-title parameters to citations, but I am careful to check for Google's output making for good English as well as reflecting the foreign title when it is a language I somewhat understand. I like to think that I am careful, and I do not pretend to be fluent in a language I am not familiar with, although I usually don't announce the source of such a translation. If an editor uses AI profligately and without understanding the material generated, then that is the sin; not AI itself. ] (]) 05:04, 5 January 2025 (UTC) | |||
*:There's a legal phrase, "when the exception swallows the rule", and I think we might be headed there with the recent LLM/AI discussions. | |||
*:We start off by saying "Let's completely ban it!" Then in discussion we add "Oh, except for this very reasonable thing... and that reasonable thing... and nobody actually meant this other reasonable thing..." | |||
*:The end result is that it's "completely banned" ...except for an apparent majority of uses. ] (]) 06:34, 5 January 2025 (UTC) | |||
*::Do you want us to reply to you, because you are a human? Or are you just posting the output of an LLM without bothering to read anything yourself? ] (]) 06:08, 7 January 2025 (UTC) | |||
*:::Most likely you would reply because someone posted a valid comment and you are assuming they are acting in good faith and taking responsibility for what they post. To assume otherwise is kind of weird and not in line with general Misplaced Pages values. ] (]) 15:19, 8 January 2025 (UTC)
*'''No''' The OP seems to misunderstand ] which is not aimed at weak editors but instead exhorts stronger editors to lead by example. That section already seems to overload the primary point of WP:AGF and adding mention of AI would be quite inappropriate per ]. ]🐉(]) 23:11, 5 January 2025 (UTC) | |||
*'''No'''. Reading the current text of the section, adding text about AI would feel out-of-place for what the section is about. <span class="nowrap">—] (] | ])</span> 05:56, 8 January 2025 (UTC) | |||
*'''No''', this is not about good faith. ] (]) 11:14, 9 January 2025 (UTC) | |||
*'''Yes'''. AI use is ''not'' a demonstration of bad faith (in any case not every new good-faith editor is familiar with our AI policies), but it is equally not a "demonstration of good faith", which is what the ] section is about. | |||
:It seems some editors are missing the point and !voting as if every edit is either a demonstration of good faith or bad faith. Most interactions are neutral and so is most AI use, but I find it hard to imagine a situation where AI use would point ''away'' from unfamiliarity and incompetence (in the CIR sense), and it often (unintentionally) leads to a presumption of laziness and open disinterest. It makes perfect sense to recommend against it. <span style="font-family:Garamond,Palatino,serif;font-size:115%;background:-webkit-linear-gradient(red,red,red,blue,blue,blue,blue);-webkit-background-clip:text;-webkit-text-fill-color:transparent">] ]</span> 22:56, 9 January 2025 (UTC) | |||
::Indeed most kinds of actions don't inherently demonstrate good or bad. The circumspect and neutral observation that {{tq|AI use is ''not'' a demonstration of bad faith... but it is equally not a "demonstration of good faith"}}, does not justify a proposal to one-sidedly say just half. And among all the actions that don't necessarily demonstrate good faith (and don't necessarily demonstrate bad faith either), it is not the purpose of "demonstrate good faith" and the broader guideline, to single out one kind of action to especially mention negatively. ] (]) 04:40, 13 January 2025 (UTC) | |||
*'''Yes'''. Per Dass Wolf, though I would say passing off a completely AI-generated comment as your own ''anywhere'' is inherently bad-faith and one doesn't need to know Wiki policies to understand that. ] (]) 23:30, 9 January 2025 (UTC) | |||
*'''Yes'''. Sure, LLMs may have utility somewhere, and it might be a crutch for people unfamiliar with English, but as I've said above in the other AI RfC, that's a ] issue. This is about comments eating up editor time, energy, about LLMs easily being used to ram through changes and poke at editors in good standing. I don't see a case wherein a prospective editor's command of policy and language is good enough to discuss with other editors while being bad enough to require LLM use. ]<span style="color: #3558b7;"><sup>]</sup>]</span> 01:26, 10 January 2025 (UTC) | |||
*:Good faith is separate from competence. Trying to do good is separate from having skills and knowledge to achieve good results. ] (]) 04:40, 13 January 2025 (UTC) | |||
*'''No''' - anyone using a washing machine to wash their clothes must be evil and inherently lazy. They cannot be trusted. ... Oh, sorry, wrong century. Regards, --] (]) 01:31, 10 January 2025 (UTC) | |||
*'''No''' - As long as a person understands (and knows) what they are talking about, we shouldn't discriminate against folks using generative AI tech for grammar fixes or minor flow improvements. Yes, AI can create walls of text, and make arguments not grounded in policy, but we could do that even without resorting to generative AI. ] (]) 11:24, 13 January 2025 (UTC) | |||
::To expand on my point above. Completely AI generated comments (or articles) are obviously bad, but {{tq|using AI}} should be thrown into the same cross-hairs as completely AI generated comments. ] (]) 11:35, 13 January 2025 (UTC) | |||
:::@] You mean ''shouldn't'' be thrown? I think that would make more sense given the context of your original !vote. <sub>Duly signed,</sub> ''']'''-''<small>(])</small>'' 14:08, 14 January 2025 (UTC) | |||
*'''No'''. Don't make any changes. It's not a good faith/bad faith issue. The 'yes' arguments are most unconvincing with very bizarre analogies to make their point. Here, I can make one too: "Don't edit with AI; you wouldn't shoot your neighbor's dog with a BB-gun, would you?" <sub>Duly signed,</sub> ''']'''-''<small>(])</small>'' 14:43, 13 January 2025 (UTC) | |||
* {{Collapse top}} I appreciate your concern about the use of AI in discussions. It is important to be mindful of how AI is used, and to ensure that it is used in a way that is respectful of others. | |||
When I log on here, I'm presented with a page showing the various languages Misplaced Pages is available in. The language I choose is English, since I live in the USA. The system used in the USA for weights and measures differs from the metric system. I feel that references citing weights and measures in the English language section of Misplaced Pages should at least contain the system used in the USA. ] 19:57, 21 August 2007 (UTC)
I don't think that WP:DGF should be amended to specifically mention AI. However, I do think that it is important to be aware of the potential for AI to be used in a way that is not in good faith. | |||
:See ] for an explanation of use of units. Metric is preferred, except in those articles which are specific to the US, or in those fields where other units are typical (e.g. aviation). It is usually helpful to have a parenthetic conversion of weights and measures into the other system where appropriate. ] ] ] 20:05, 21 August 2007 (UTC)
When using AI, it is important to be transparent about it. Let others know that you are using AI, and explain how you are using it. This will help to build trust and ensure that others understand that you are not trying to deceive them. | |||
::What is wrong with gradually learning a system of measurement that your own country has been trying to adopt for some time, and catching up with the rest of the world? The imperial system is an antiquated and irregular system of measurement that has been the cause of substantial disadvantage in the U.S. in the fields of military, aerospace, international trade and commerce (see metric system). The sooner you become familiar with the world's current system of measurement, the sooner you will be able to orient yourself with the ''World'' Wide Web. Yes, the internet does extend beyond your country. ] 13:48, 22 August 2007 (UTC)
It is also important to be mindful of the limitations of AI. AI is not a perfect tool, and it can sometimes generate biased or inaccurate results. Be sure to review and edit any AI-generated content before you post it. | |||
:::When sources give measures in a particular system, that system should be given first. For example, dimensions for US lighthouses are given (by our sources) in feet. Using meters in this case is imprecise; they can be included in the article, but they are derived values and should be presented as such. (Ditto for the other direction.) It's not a matter for parochialism, American or Australian or otherwise. It's simply a matter of accurate presentation. ] 13:59, 22 August 2007 (UTC) | |||
Finally, it is important to remember that AI is just a tool. It is up to you to use it in a way that is respectful and ethical. {{Collapse bottom}} It's easy to detect for most, can be pointed out as needed. '''No''' need to add an extra policy ''']]''' | |||
::That is true, though I suppose the original point was that the user wanted the units to match the local system settings. Whilst it is an understandable and desirable default for the user, I decided to be a bit opportunistic and poke fun at the ethnocentric viewpoint expressed. Where there is an issue with the accuracy of the presentation, or for proper names (e.g. Ninety Mile Beach), it doesn't make sense to use metric, so I would definitely agree with you. But in the case of other quoted measures used around the world it would be better that the metric unit equivalent is available. ] 14:20, 22 August 2007 (UTC)
== Allowing non-admin "delete" closures at RfD == | |||
== Wikiproject guidelines and ] == | |||
At ], a few editors ({{u|Enos733}} and {{u|Jay}}, while {{u|Robert McClenon}} and {{u|OwenX}} hinted at it) expressed support for allowing non-administrators to close RfD discussions as "delete". While I don't personally hold strong opinions in this regard, I would like for this idea to be discussed here. ]<sub>]<sub>]</sub></sub> (]/]) 13:13, 7 January 2025 (UTC) | |||
I proposed in ] about wikiprojects roles in borderline BIO cases, which has mainly to do with minor league players but it could be expanded as such. ], the last section. Thanks ] ] 20:13, 21 August 2007 (UTC) | |||
*] --] <sup>(])</sup> 14:10, 7 January 2025 (UTC) | |||
:To me, it's time we create a new set of guidelines addressing the notability of all athletes (think ]). We should go sport by sport and set notability guidelines for each one. '''<font face="Comic Sans MS">]</font>''' 21:32, 21 August 2007 (UTC)
*While I have no issue with the direction the linked discussion has taken, I agree with almost every contributor there: As a practice I have zero interest in generally allowing random editors closing outside their permissions. It might make DRV a more chatty board, granted. ] (]) 15:02, 7 January 2025 (UTC) | |||
::I would second that idea. <span id="{{{User|Acdixon}}}" class="plainlinks" style="color:#002bb8">] <sup>(] <small>•</small> ] <small>•</small> )</sup></span> 19:41, 22 August 2007 (UTC)
*:Tamzin makes a reasonable case in their comment below. When we have already chosen to trust certain editors with advanced permissions, we might allow those folks to utilize them as fully as accepted practice allows. Those humans already have skin in the game. They are unlikely to act rashly. ] (]) 19:32, 7 January 2025 (UTC) | |||
* To me, non-admin delete closes at any XfD have always seemed inconsistent with what we say about how adminship and discussion closing work. I would be in violation of admin policy if I deleted based on someone else's close without conducting a full review myself, in which case, what was the point of their close? It's entirely redundant to my own work. That said, I can't really articulate a reason that this should be allowed at some XfDs but not others, and it seems to have gone fine at CfD and TfD. I guess call me neutral. {{PB}} What I'd be more open to is allowing page movers to do this. Page movers do have the tools to turn a bluelink red, so it doesn't create the same admin accountability issue if I'm just cleaning up the stray page left over from a page mover's use of a tool that they were duly granted and subject to their own accountability rules for. We could let them move a redirect to some other plausible title (this would violate ] as currently written but I think I'd be okay with making this a canonical exception), and/or allow moving to some draftspace or userspace page and tagging for G6, as we do with {{tl|db-moved}}. I'll note that when I was a non-admin pagemover, I did close a few things as delete where some edge case applied that let me effect the deletion using only suppressredirect, and no one ever objected. <span style="font-family:courier"> -- ]</span><sup class="nowrap">[]]</sup> <small>(])</small> 19:07, 7 January 2025 (UTC) | |||
*::I see that I was sort of vague, which is consistent with the statement that I hinted at allowing non-admin delete closures. My main concern is that I would like to see our guidelines and our practice made consistent, either by changing the guidelines or changing the practice. It appears that there is a rough consensus emerging that non-admin delete closures should continue to be disallowed in RFD, but that CFD may be a special case. So what I am saying is that if, in practice, we allow non-admin Delete closures at CFD, the guideline should say something vague to that effect. | |||
*::I also see that there is a consensus that DRV can endorse irregular non-admin closures, including irregular non-admin Delete closures. Specifically, it isn't necessary for DRV to vacate the closure for an ] admin to close. A consensus at DRV, some of whose editors will be uninvolved admins, is at least as good a close as a normal close by an uninvolved admin. | |||
*::Also, maybe we need clearer guidance about non-admin Keep closures of AFDs. I think that if an editor is not sure whether they have sufficient experience to be closing AFDs as Keep, they don't have enough experience. I think that the guidance is clear enough in saying that ] applies to non-admin closes, but maybe it needs to be further strengthened, because at DRV we sometimes deal with non-admin closes where the closer doesn't respond to inquiries, or is rude in response to them. | |||
*::Also, maybe we need clearer guidance about non-admin No Consensus closures of AFDs. In particular, a close of No Consensus is a contentious closure, and should either be left to an admin, or should be Relisted. | |||
::] (]) 19:20, 7 January 2025 (UTC) | |||
:::As for {{tq| I can't really articulate a reason that this should be allowed at some XfDs}}, the argument is that more work is needed to enact closures at TfD and CfD (namely orphaning templates and emptying/moving/merging categories). Those extra steps aren't present at RfD. At most, there are times when it's appropriate to unlink the redirect or add ]s but those are automated steps that ] handles. From my limited experience at TfD and CfD though, it does seem that the extra work needed at closure does not compensate for the extra work from needing two people reviewing the closure (especially at CfD, because a bot handles the clean-up). Consistency has come up and I would much rather consistently disallow non-admin delete closures at all XfD venues. I know it's tempting for non-admins to think they're helping by enacting these closures but it's not fair for them to be spinning their wheels. As for moving redirects, that's even messier than deleting them. There's a reason that ] advises not to move redirects except for limited cases when preserving history is important. --] <sup>(])</sup> 20:16, 7 January 2025 (UTC)
::@]: I do have one objection to this point of redundancy, which you are ]. Here, an AfD was closed as "transwiki and delete", however, the admin who did the closure does not have the technical ability to transwiki pages to the English Wikibooks, meaning that I, who does, had to determine that the outcome was actually to transwiki rather than blindly accepting a request at ]. Then, I had to mark the pages for G6 deletion, that way an admin, in this case you, could determine that the page was ready to be deleted. Does this mean that that admin who closed the discussion shouldn't have closed it, since they only have the technical ability to delete, not transwiki? Could I have closed it, having the technical ability to transwiki, but not delete? Either way, someone else would have had to review it. Or, should only people who have importing rights on the target wiki ''and'' admin rights on the English Misplaced Pages be allowed to close discussions as "transwiki and delete"? ]<sub>]<sub>]</sub></sub> (]/]) 12:04, 8 January 2025 (UTC) | |||
*I do support being explicit when a non-administrator can close a discussion as "delete" and I think that explicitly extending to RfD and CfD is appropriate. First, there can be a backlog in both of these areas and there are often few comments in each discussion (and there is usually not the same passion as in an AfD). Second, the delete close of a non-administrator is reviewed by an administrator before action is taken to delete the link, or category (a delete close is a two-step process, the writeup and the delete action, so in theory the administrator's workload is reduced). Third, non-admins do face ] for their actions, and can be subject to sanction. Fourth, the community has a role in reviewing closing decisions at DRV, so there is already a process in place to check an unexperienced editor or poor close. Finally, with many, if not most, discussions for deletion the outcome is largely straightforward. --] (]) 20:01, 7 January 2025 (UTC)
*There is currently no rule against non-admin delete closures as far as I know; the issue is the practical one that you don't have the ability to delete. However, I ''have'' made non-admin delete closures at AfD. This occurred when an admin deleted the article under consideration (usually for COPYVIO) without closing the related AfD. The closures were not controversial and there was no DRV. ] ] 20:31, 7 January 2025 (UTC) | |||
== ] == | |||
::The situation you're referring to is an exception allowed per ]: {{tq|If an administrator has deleted a page (including by speedy deletion) but neglected to close the discussion, anyone with a registered account may close the discussion provided that the administrator's name and deletion summary are included in the closing rationale.}} --] <sup>(])</sup> 20:37, 7 January 2025 (UTC) | |||
*Bad idea to allow, this sort of closure is just busy work, that imposes more work on the admin that then has to review the arguments, close and then delete. ] (]) 22:05, 7 January 2025 (UTC) | |||
* Is this the same as ] above? ]] 23:04, 7 January 2025 (UTC) | |||
**Yes, ]. Same issue coming from the same ]. ] (]) 03:52, 8 January 2025 (UTC) | |||
* (1) As I've also ], the deletion process guidelines at ] do say non-admins shouldn't do "delete" closures and do recognize exceptions for CfD and TfD. There isn't a current inconsistency there between guidelines and practice. <br>(2) In circumstances where we do allow for non-admin "delete" closures, I would hope that the implementing admin isn't fully ] before implementing, but rather giving deference to any reasonable closure. That's how it goes with ] closers asking for technical help implementing a "moved" closure at ] (as noted at ], the closure will "generally be respected by the administrator (or page mover)" but can be reverted by an admin if "clearly improper"). ] ] 08:41, 9 January 2025 (UTC) | |||
*'''Comment''' - A couple things to note about the CFD process: It very much requires work by admins. The non-admin notes info about the close at WT:CFD/Working, and then an admin enters the info on the CFD/Working page (which is protected) so that the bot can perform the various actions. Remember that altering a category is potentially more labour intensive than merely editing or deleting a single page - every page in that category must be edited, and then the category deleted. (There are other technical things involved, like the mess that template transclusion can cause, but let's keep it simple.) So I wouldn't suggest that that process is very useful as a precedent for anything here. It was done at a time when there was a bit of a backlog at CfD, and this was a solution some found to address that. Also - since then, I think at least one of the regular non-admin closers there is now an admin. So there is that as well. - <b>]</b> 09:14, 9 January 2025 (UTC) | |||
Members of the community may be interested in knowing that there is currently a discussion at ] over the guidelines for naming US city articles. The debate starts at the section called "Requested Moves", and continues through several subsections. '''] ]''' <sup>]</sup> 01:33, 22 August 2007 (UTC) | |||
*If the expectation is that an admin needs to review the deletion discussion to ensure they agree with that outcome before deleting via G6, as multiple people here are suggesting, then I'm not sure this is worthwhile. However, I have had many admins delete pages I've tagged with G6, and I have been assuming that they only check that the discussion was indeed closed as delete, and trust the closer to be responsible for the correctness of it. This approach makes sense to me, because if a non-admin is competent to close and be responsible for any other outcome of a discussion, I don't see any compelling reason they can't be responsible for a delete outcome and close accordingly. <span style="white-space: nowrap;">—] <sup>(]·])</sup></span> 19:51, 9 January 2025 (UTC) | |||
*:Some closers, and you're among them, have closing accuracy similar to many sysops. But the sysop can't/shouldn't "trust" that your close is accurate. Trustworthy though you are, the sysop must, at very minimum, check firstly that the close with your signature on it was actually made by you (signatures are easily copied), secondly that the close wasn't manifestly unreasonable, and thirdly that the CSD is correct. ] holds the deleting sysop responsible for checking that the CSD were correctly applied. G6 is for uncontroversial deletions, and if there's been an XFD, then it's only "uncontroversial" if the XFD was unanimous or nearly so. We do have sysops who'll G6 without checking carefully, but they shouldn't. Basically, non-admin closing XFDs doesn't save very much sysop time. I think that if your motive as a non-admin is to relieve sysops of labour, the place you're of most use is at RfC.—] <small>]/]</small> 11:28, 12 January 2025 (UTC) | |||
*::{{tpq|if your motive as a non-admin is to relieve sysops of labour, the place you're of most use is at RfC}} alternatively you should consider becoming an administrator yourself. ] (]) 13:20, 12 January 2025 (UTC) | |||
*:::<small>If you're willing to tolerate the RFA process.—] <small>]/]</small> 15:24, 12 January 2025 (UTC)</small> | |||
*::In all the cases I have dealt with, the admin's reason for deletion (usually copyvio) was completely different to the issues being debated in the AfD (usually notability). The closing statement was therefore something like "Discussion is now moot due to article being deleted for <reason> by <admin>". ] ] 20:10, 14 January 2025 (UTC) | |||
*I think most all the time, experienced closers will do a great job and that will save admin time because they will not have to construct and explain the close from scratch, but there will be some that are bad and that will be costly in time not just for the admin but for the project's goal of completing these issues and avoiding disruption. I think that lost time is still too costly, so I would oppose non-admin delete closes. (Now if there were a proposal for a process to make a "delete-only admin permission" that would be good -- such motivated specialists would likely be more efficient.) ] (]) 16:44, 12 January 2025 (UTC) | |||
* As I said at the "Non-Admin XFD Close as Delete" section, I support non-admins closing RfDs as Delete. If TfDs have been made an exception, RfDs can be too, especially considering RfD backlogs. Closing a heavily discussed nomination at RfD is more about the reading, analysis and thought process at arriving at the outcome, and less about the technicality of the subsequent page actions. I don't see a significant difference between non-admins closing discussions as Delete vs non-Delete. It will help make non-admins mentally prepared to advance to admin roles.<span style="font-family:Segoe Script">]</span><span style="font-size:115%">]</span> 14:53, 14 January 2025 (UTC)
* The backlog at RFD is mostly lack of participation, not lack of admins not making closures. This would only be exacerbated if non-admins are given a reason not to !vote on discussions trending toward deletion so they can get the opportunity to close. RFD isn't as technical as CFD and TFD. In any case, any admin doing the deletion would still have to review the RFD. Except in the most obviously trivial cases, this will lead to duplicate work, and even where it doesn't (e.g. multiple !votes all in one direction), the value-add is minimal. | |||
:-- ] - <sup>]</sup>/<sub>]</sub> 16:34, 20 January 2025 (UTC) | |||
== Modifying the first sentence of BLPSPS ==

{{FYI}} A discussion has been started at ] re: modifying the text of BLPSPS. ] (]) 14:23, 13 January 2025 (UTC)

==Copyright on highway shields==

I have created ] as a page to discuss and determine the copyright status of logos for highways, mainly toll roads. Please help, especially if you are familiar with copyright law. Thank you. --] 03:57, 22 August 2007 (UTC)
:Note to UK, Irish, Australasian, South African, and other English-speaking editors outside of North America - this is likely to involve highways in the United States (and possibly Canada) only. ] 12:47, 23 August 2007 (UTC)
== How to add a new list or glossary of terms (B2B technical terms) standards == | |||
== Upgrade ] to an official guideline == | |||
Hi, | |||
{{Discussion top|result= {{Moved discussion to|Misplaced Pages talk:WikiProject Albums/Album article style advice|2=] (] | ]) 21:10, 15 January 2025 (UTC)}}}} | |||
] is an essay. I've been editing since 2010, and for the entire duration of that, this essay has been referred to and used extensively, and has even guided discussions regarding ascertaining if sources are reliable. I propose that it be formally upgraded to a status as an MOS guideline parallel to ].--] (] | ]) 14:28, 13 January 2025 (UTC) | |||
:I'm broadly in favor of this proposal—I looked over the essay and most of it is aligned with what seems standard in album articles—but there are a few aspects that feel less aligned with current practice, which I'd want to reexamine before we move forward with promoting this: | |||
I would like to understand and get more information as to how would I go about publishing a glossary on to WIKIpedia. My company and other people from other high tech insdustries have a long list of terms associated with B2B technology standards that we would like to publish on here. Is there any upfront cost if any? | |||
:* The section ] suggests {{tq|What other works of art is this producer known for?}} as one of the categories of information to include in a recording/production section. This can be appropriate in some cases (e.g., the '']'' article discusses how Butch Vig's work with Killdozer inspired Nirvana to try and work with him), but recommending it outright seems like it'd risk encouraging people to ]. My preference would be to cut the sentence I quoted and the one immediately following it. | |||
:* The section ] suggests that the numbered-list be the preferred format for track listings, with other formats like {{tl|Track listing}} being alternative choices for "more complicated" cases. However, in my experience, using {{tlg|Track listing|nolink=yes}} rather than a numbered list tends to be the standard. All of the formatting options currently listed in the essay should continue to be mentioned, but I think portraying {{tlg|Track listing|nolink=yes}} as the primary style would be more reflective of current practice. | |||
:* The advice in the ] section seems partially outdated. In my experience, review aggregators like Metacritic are conventionally discussed in the "Critical reception" section instead these days, and I'm uncertain to what extent we still link to databases like Discogs even in ELs. | |||
:(As a disclaimer, my familiarity with album articles comes mostly from popular-music genres, rock and hip-hop in particular. I don't know if typical practice is different in areas like classical or jazz.) Overall, while I dedicated most of my comment volume to critiques, these are a fairly minor set of issues in what seems like otherwise quite sound guidance. If they're addressed, it's my opinion that this essay would be ready for prime time. ] (] • ]) 15:19, 13 January 2025 (UTC) | |||
::I'd agree with all of this, given my experience. The jazz and classical that I've seen is mostly the same.--] (] | ]) 16:57, 13 January 2025 (UTC) | |||
:::Me too, though sometime last year, I unexpectedly had some (inexplicably strong) pushback on the tracklist part with an editor or two. In my experience, using the track list template is the standard, and I can't recall anyone giving me any pushback for it, but some editors apparently prefer just using numbers. I guess we can wait and see if there's any current pushback on it. 17:01, 13 January 2025 (UTC) ] ] 17:01, 13 January 2025 (UTC) | |||
::::Was it pushback for how you had rendered the tracklist, or an existing tracklist being re-formatted by you or them?--] (] | ]) 18:13, 13 January 2025 (UTC) | |||
:::::They came to WT:ALBUMS upset that another editor was changing track lists from "numbered" to "template" formats. My main response was surprised, because in my 15+ years of article creations and rewrites, I almost exclusively used the tracklist template, and had never once received any pushback. | |||
:::::So basically, I personally agree with you and MDT above, I'm merely saying I've heard someone disagree. I'll try to dig up the discussion. ] ] 17:50, 14 January 2025 (UTC) | |||
::::::I found , though this was more about sticking to the current wording as is than it was about opposition against changing it. Not sure if there was another one or not. ] ] 18:14, 14 January 2025 (UTC) | |||
::::I remember one editor being strongly against the template, but they are now community banned. Everyone else I've seen so far uses the template. <span style="background:#16171c; font-family:monospace; font-weight:600; padding:2px; box-shadow:#9b12f0 2px -2px">] ]</span> 22:25, 13 January 2025 (UTC) | |||
::::I can see the numbered-list format being used for very special cases like '']'', which was released with only two songs, and had the same co-writers and producer. But I imagine we have extremely few articles that are like that, so I believe the template should be the standard. ] 🦗🐜 <sup><small>]'']</small></sup> 12:23, 14 January 2025 (UTC) | |||
:::{{u|ModernDayTrilobite}}, regarding linking to ], some recent discussions I was in at the end of last year indicate that it is common to still link to Discogs as an EL, because it gives more exhaustive track, release history, and personnel listings that Misplaced Pages - generally - should not.--] (] | ]) 14:14, 15 January 2025 (UTC) | |||
::::Thank you for the clarification! In that case, I've got no objection to continuing to recommend it. ] (] • ]) 14:37, 15 January 2025 (UTC) | |||
::There were several discussions about Discogs and an RfC ]. As a user of {{tl|Discogs master}}, I agree with what other editors said there. We can't mention every version of an album in an article, so an external link to Discogs is invaluable IMO. <span style="background:#16171c; font-family:monospace; font-weight:600; padding:2px; box-shadow:#9b12f0 2px -2px">] ]</span> 22:34, 13 January 2025 (UTC) | |||
:We badly need this to become part of the MOS. As it stands, some editors have rejected the guidelines as they're just guidelines, not policies, which defeats the object of having them in the first place. ] (]) 16:59, 13 January 2025 (UTC) | |||
::I mean, they are guidelines, but deviation per ] should be for a good reason, not just because someone feels like it.--] (] | ]) 18:14, 13 January 2025 (UTC) | |||
:I am very much in favor of this becoming an official MOS guideline per ] above. Very useful as a template for album articles. ] (]) 21:03, 13 January 2025 (UTC) | |||
:I recently wrote my first album article and this essay was crucial during the process, to the extent that me seeing this post is like someone saying "I thought you were already an admin" in RFA; I figured this was already a guideline. I would support it becoming one. ] (]) 02:00, 14 January 2025 (UTC) | |||
:I have always wondered why all this time these pointers were categorized as an essay. It's about time we formalize them; as said earlier, there are some outdated things that need to be discussed (like in ] which advises not to use stores for credits, even though in the streaming era we have more and more albums/EPs that never get physical releases). Also, song articles should also have their own guidelines, IMV. ] 🦗🐜 <sup><small>]'']</small></sup> 12:19, 14 January 2025 (UTC) | |||
::I'd be in favor of discussing turning the outline at the main page for ] into a guideline.--] (] | ]) 12:53, 14 January 2025 (UTC) | |||
:::I get the sense it'd have to be a separate section from this one, given the inherent complexity of album articles as opposed to that of songs. ] 🦗🐜 <sup><small>]'']</small></sup> 14:56, 14 January 2025 (UTC) | |||
::::Yes, I think it should be a separate, parallel guideline.--] (] | ]) 16:53, 14 January 2025 (UTC) | |||
:I think it needs work--I recall that a former longtime album editor, Richard3120 (not pinging them, as I think they are on another break to deal with personal matters), floated a rewrite a couple of years ago. Just briefly: genres are a perennial problem, editors love unsourced exact release dates and chronology built on OR (many discography pages are sourced only to random ''Billboard'', AllMusic, and Discogs links, rather than sources that provide a comprehensive discography), and, like others, I think all the permutations of reissue and special edition track listings has gotten out of control, as well as these long lists of not notable personnel credits (eight second engineers, 30 backing vocalists, etc.). Also agree that the track listing template issue needs consensus; if three are acceptable, then three are acceptable--again, why change it to accommodate the names of six not notable songwriters? There's still a divide on the issue of commercial links in the body of the article--I have yet to see a compelling reason for their inclusion (WP is, uh, not for sale, remember?), when a better source can always be found (and editors have noted, not that I've made a study of it, that itunes often uses incorrect release dates for older albums). But I also acknowledge that since this "floated" rewrite never happened, then the community at large may be satisfied with the guidelines. ] (]) 13:45, 14 January 2025 (UTC) | |||
::Regarding the personnel and reissue/special edition track listing, I don't know if I can dig up the discussions, but there seems to be a consensus against being exhaustive and instead to put an external link to Discogs. I fail to see how linking to ''Billboard'' or AllMusic links for a release date on discographies is OR, unless you're talking about in the lead. At least in the case of Billboard, that's an established RS (AllMusic isn't the most accurate with dates).-- ] (] | ]) 13:53, 14 January 2025 (UTC) | |||
:::I meant that editors often use discography pages to justify chronology, even though ''Billboard'' citations are simply supporting chart positions, Discogs only states that an album exists, and AllMusic entries most often do not give a sequential number in their reviews, etc. There is often not a source (or sources) that states that the discography is complete, categorized properly, and in order. ] (]) 14:05, 14 January 2025 (UTC) | |||
::::Ah, okay, I understand now.--] (] | ]) 16:54, 14 January 2025 (UTC) | |||
Myself, I've noticed that some of the sourcing recommendations are contrary to WP:RS guidance (more strict, actually!) or otherwise outside consensus. For instance, MOS:ALBUMS currently says to not use vendors for track list or personnel credits, linking to ] in WP:RS, but AFFILIATE actually says that such use is acceptable but not preferred. Likewise, MOS:ALBUMS says not to use scans of liner notes, which is 1. absurd, and 2. not actual consensus, which in the discussions I've had is that actual scans are fine (which makes sense as it's a digital archived copy of the source).--] (] | ]) 14:05, 14 January 2025 (UTC) | |||
I appreciate any guidence and help on this matter. | |||
:The tendency to be overreliant on liner notes is also a detriment. I've encountered some liner notes on physical releases that have missing credits (e.g. only the producers are credited and not the writers), or there are outright no notes at all. Tangentially, some physical releases of albums like '']'' and '']'' actually direct consumers to official websites to see the credits, which has the added problem of link rot ( for ''Still Over It'' and is a permanent dead link). ] 🦗🐜 <sup><small>]'']</small></sup> 15:04, 14 January 2025 (UTC) | |||
thank you | |||
::That turns editors to using stores like Spotify or Apple Music as the next-best choice, but a new problem arises -- the credits for a specific song can vary depending on the site you use. One important thing we should likely discuss is what sources should take priority wrt credits. For an example of what I mean, take "]". to check its credits and you'd find the name Sean Garrett -- , however, and that name is missing. I assume these digital credits have a chance to deviate from the albums' physical liner notes as well, if there is one available. ] 🦗🐜 <sup><small>]'']</small></sup> 15:11, 14 January 2025 (UTC) | |||
karen <small>—The preceding ] comment was added by ] (]) {{{Time|14:12, August 22, 2007 (UTC)}}}</small><!-- Template:UnsignedIP --> <!--Autosigned by SineBot--> | |||
:::Moreover, the credits in stores are not necessarily correct either. An example I encountered was on ], an amazing service and the only place where I could find detailed credits for one album (not even liner notes had them, since back then artists tried to avoid sample clearance). However, as I was double checking everything, one song made no sense: in its writing credits I found "Curtis Jackson", with a link to ]'s artist page. It seemed <em>extremely</em> unlikely that they would collaborate, nor any of his work was sampled here. Well, it turns out this song sampled a song written by Charles Jackson of ]. <span style="background:#16171c; font-family:monospace; font-weight:600; padding:2px; box-shadow:#9b12f0 2px -2px">] ]</span> 16:39, 14 January 2025 (UTC) | |||
::::{{u|PSA}} and {{u|AstonishingTunesAdmirer}}, I agree that it's difficult. I usually use both the physical liner notes and online streaming and retail sources to check for completeness and errors. I've also had the experience of ] being a great resource, and, luckily, so far I've yet to encounter an error. Perhaps advice for how to check multiple primary sources here for errors should be added to the proposed guideline.--] (] | ]) 17:00, 14 January 2025 (UTC) | |||
:::::At this point, I am convinced as well that finding the right sources for credits should be on a case-by-case basis, with the right amount of discretion from the editor. While I was creating ], which included several SoundCloud songs where it was extremely hard to find songwriting credits, I found the useful for filling those missing gaps. More or less the credits there align with what's on the liner notes/digital credits. However, four issues, most of which you can see by looking at the list I started: 1) they don't necessarily align with physical liner notes either, 2) sometimes names are written differently depending on the entry, 3) there are entries where a writer (or co-writer) is unknown, and 4) some of the entries here were never officially released and confirmed as outtakes/leaks (why is "BET Awards 19 Nomination Special" here, whatever that means?). ] 🦗🐜 <sup><small>]'']</small></sup> 22:59, 14 January 2025 (UTC) | |||
::::::Yeah, I've found it particularly tricky when working on technical personnel (production, engineering, mixing, etc.) and songwriting credits for individuals. I usually use the liner notes (if there are any), check AllMusic and ], and also check Tidal if necessary. But I'll also look at Spotify, too. I know they're user-generated, so I don't cite them, but I usually look at Discogs and Genius to get an idea if I'm missing something. Thank you for pointing me to Songview, that will probably also be really helpful. ] (] | ]) 12:50, 15 January 2025 (UTC) | |||
:(@], please see ] for advice on advertising discussions about promoting pages to a guideline. No, you ''don't'' have to start over. But maybe add an RFC tag or otherwise make sure that it is very widely publicized.) ] (]) 23:37, 14 January 2025 (UTC) | |||
::Thank you. I'll notify the Manual of Style people. I did already post a notice at WP:ALBUMS. I'll inform other relevant WikiProjects as well.--] (] | ]) 12:46, 15 January 2025 (UTC) | |||
Before posting the RfC as suggested by {{u|WhatamIdoing}}, I'm proposing the following changes to the text of MOS:ALBUM as discussed above: | |||
:This is not the place to "publish" information. This is an encyclopedia and, as such, we report ''previously'' published information. If your glossary has already been published, you can write an article about it. If the terms have been published in some other glossery or glosseries you can write an article about the terms. If not, then you are out of luck. As for up front costs... um... YEEEAH... just forward $1,000,000 (payable to "Blueboar") to my Paypal account. :>) (Seriously - what part of "The ''free'' encyclopedia that anyone can edit" confuses you?). ] 15:35, 22 August 2007 (UTC) | |||
# Eliminate {{!xt|What other works of art is this producer known for? Keep the list of other works short, as the producer will likely have their own article with a more complete list.}} from the "Recording, production" sub-section. | |||
::This editor has also posted this same question at ] and on my talk page. I addressed it with much the same response as above, though money has not previously been mentioned. If only I had known; I could use the extra cash! '''''] ]''''' 18:09, 22 August 2007 (UTC) | |||
# Rework the text of the "Style and form" for tracklistings to: | |||
::{{xt|1=The track listing should be under a primary heading named "Track listing".}} | |||
::{{xt|1=A track listing should generally be formatted with the {{tl|Track listing}} template. Note, however, that the track listing template forces a numbering system, so tracks originally listed as "A", "B", etc., or with other or no designations, will not appear as such when using the template. Additionally, in the case of multi-disc/multi-sided releases, a new template may be used for each individual disc or side, if applicable.}} | |||
== Proposal Guideline/Policy == | |||
::{{xt|1=Alternate forms, such as a table or a ], are acceptable but usually not preferred. If a table is used, it should be formatted using class="wikitable", with column headings "No.", "Title" and "Length" for the track number, the track title and the track length, respectively (see Help:Table). In special cases, such as '']'', a numbered list may be the most appropriate format.}} | |||
I would like to dicuss here a proposal for a possible Guideline/Policy (Whatever it suits the best) on Misplaced Pages, labelled "Misplaced Pages:Don't edit for power." and this page is to warn that you should never edit[REDACTED] just for the purpose of gaining power to become an admin or such, because it's for building an encyclopedia, and if you try for power and fail. The result can drive editors mad and cause disputes, etc why[REDACTED] shouldn't be used for power. I would like feedback on this before I see if such a page should be created. ] 16:52, 22 August 2007 (UTC) | |||
:Feel free to write an ''essay'' on this (we have lots of essays that reflect the ideas of individual editors and give advice about how best to do things on wikipedia... one more won't kill us)... but I seriously doubt that it would ''ever'' be promoted to guideline/policy level. It just isn't the sort of idea that most people think should become 'official'. ] 17:20, 22 August 2007 (UTC) | |||
# Move {{xt|1= Critical reception overviews like AcclaimedMusic (using {{tl|Acclaimed Music}}), AnyDecentMusic?, or Metacritic may be appropriate as well.}} from "External links" to "Album ratings templates" of "Critical reception", right before the sentence about using {{tl|Metacritic album prose}}. | |||
I will write an essay then, but I would still like to see more opinions on this. ] 17:22, 22 August 2007 (UTC) | |||
# Re-write this text from "Sourcing" under "Track listing" from {{!xt|However, if there is disagreement, there are other viable sources. Only provide a source for a track listing if there are exceptional circumstances, such as a dispute about the writers of a certain track. Per ], avoid commercial sources such as online stores and streaming platforms. In the rare instances where outside citations are required, explanatory text is useful to help other editors know why the album's liner notes are insufficient.}} to {{xt|Per ], commercial sources such as online stores and streaming platforms are acceptable to cite for track list information, but secondary coverage in independent reliable sources is preferred if available.}} Similarly, in the "Personnel" section, re-write {{!xt| Similar to the track listing requirements, it is generally assumed that a personnel section is sourced from the liner notes. In some cases, it will be necessary to use third-party sources to include performers who are not credited in the liner notes. If you need to cite these, use {{tl|Cite AV media}} for the liner notes and do not use third party sources such as stores (per ]) or scans uploaded to image hosting sites or ] (per ]).}} to {{xt|1= Similar to the track listing requirements, it is generally assumed that a personnel section is sourced from the liner notes. If you need to cite the liner notes, use {{tl|Cite AV media}}. Scans of the physical media that have been uploaded in digital form to repositories or sites such as ] are acceptable for verification, but cite the physical notes themselves, not the ] transcriptions. Frequently, it will be necessary to use third-party sources to include performers who are not credited in the liner notes. Per ], inline citations to e-commerce or streaming platforms to verify personnel credits are allowed. However, reliable secondary sources are preferred, if available.}} | |||
# Additional guidance has been suggested for researching and verifying personnel and songwriting credits. I suggest adding {{xt|1=It is recommended to utilize a combination of the physical liner notes (if they exist) with e-commerce sites such as ] and ], streaming platforms such as ] and ], and databases such as ] credits listings and . Finding the correct credits requires careful, case-by-case consideration and editor discretion. If you would like assistance, you can reach out to ] or ] WikiProjects.}} The best section for this is probably in "Personnel", in the paragraph discussing that liner notes can be inaccurate. | |||
# The excessive listing of personnel has been mentioned. I suggest adding the following to the paragraph in the "Personnel" section beginning with "The credits to an album can be extensive or sparse.": {{xt|1=If the listing of personnel is extensive, avoid excessive, exhaustive lists, in the spirit of ]. In such cases, provide an external link to ] and list only the major personnel to the list.}} | |||
If you have any additional suggestions, or suggestions regarding the wording of any of the above (I personally think that four needs to be tightened up or expressed better), please give them. I'm pinging the editors who raised issues with the essay as currently written, or were involved in discussing those issues, for their input regarding the above proposed changes. {{u|ModernDayTrilobite}}, {{u|PSA}}, {{u|Sergecross73}}, {{u|AstonishingTunesAdmirer}}, {{u|Caro7200}}, what do you think? Also, I realize that I never pinged {{u|Fezmar9}}, the author of the essay, for their thoughts on upgrading this essay to a guideline.--] (] | ]) 17:21, 15 January 2025 (UTC) | |||
:The proposed edits all look good to me. I agree there's probably some room for improvement in the phrasing of #4, but in my opinion it's still clear enough as to be workable, and I haven't managed to strike upon any other phrasings I liked better for expressing its idea. If nobody else has suggestions, I'd be content to move forward with the language as currently proposed. ] (] • ]) 17:37, 15 January 2025 (UTC) | |||
:Weeding out otherwise productive editors for "bad motives" is like outlawing money for "making people too greedy". Moreover, if someone wants to single-handedly create fifteen "featured articles" just for the ''chance'' to become an admin, I'd say "more power to ya." | |||
:It might be better to have this discussion on its talk page. That's where we usually talk about changes to a page. ] (]) 17:38, 15 January 2025 (UTC) | |||
::{{u|WhatamIdoing}} - just the proposed changes, or the entire discussion about elevating this essay to a guideline?--] (] | ]) 18:21, 15 January 2025 (UTC) | |||
:::It would be normal to have both discussions (separately) on that talk page. ] (]) 18:53, 15 January 2025 (UTC) | |||
::::Okay, thank you. I started the proposal to upgrade the essay here, as it would be far more noticed by the community, but I'm happy for everything to get moved there.-- ] (] | ]) 19:00, 15 January 2025 (UTC) | |||
:These changes look good to me. Although, since we got rid of Acclaimed Music in the articles, we should probably remove it here too. <span style="background:#16171c; font-family:monospace; font-weight:600; padding:2px; box-shadow:#9b12f0 2px -2px">] ]</span> 19:36, 15 January 2025 (UTC) | |||
::Sure thing.--] (] | ]) 20:56, 15 January 2025 (UTC) | |||
{{Discussion bottom}} | |||
:One principle frequently articulated around WP-land is "comment on contributions, not contributors". If you feel a contributor (be it an admin or anyone else) has made a ''specific'' contribution that goes against WP standards and policy, address the contribution itself. That's much more productive, because it's easy to misinterpret motives, and it's easy to misunderstand someone's intent. | |||
== reverts all edits == | |||
:Unless you have a clear and blatant track-record of ''specific'' incidents suggesting someone is willfully disregarding WP policy, it's probably better to just ] and if possible ]. ] 17:34, 22 August 2007 (UTC) | |||
Hello everyone. I have an idea for the Misplaced Pages coders. Would it be possible for you to design an option that, with the click of a button, automatically reverts all edits of a disruptive user? This idea came to my mind because some people create disposable accounts to cause disruption in all their edits... In this case, a lot of time and energy is consumed by administrators and reverting users to undo all the vandalism. If there were a template that could revert all the edits of a disruptive user with one click, it would be very helpful. If you think regular users might misuse this option, you could limit it to Misplaced Pages administrators only so they can quickly and easily undo the disruption. ] (]) 17:31, 13 January 2025 (UTC) | |||
: While laudable I don't see it as practical and we just have too many well meaning but ambiguous policies and guidelines already. I too think that there are a lot of people out there looking for authority and/or validation in their lives and trying to find it here at WP. I see the buy little beavers packing their resumes in aspiration of getting a mop of honor. --] 19:34, 22 August 2007 (UTC) | |||
:Hi @], there's a script that does that: ]. Also, editors who use ] can single-click revert all consecutive edits of an editor. ] ] 17:44, 13 January 2025 (UTC) | |||
::Is this tool active in all the different languages of Misplaced Pages? I couldn't perform such an action with the tool you mentioned. ] (]) 17:51, 13 January 2025 (UTC) | |||
:::That script requires the ] permission, which is available only for admins and other trusted users. Admins and other users with the tool have gotten in trouble for using it inappropriately. I never use it myself, as I find the rollback in Twinkle quite sufficient for my needs. ] 17:54, 13 January 2025 (UTC) | |||
:::(ec) I don't know about other languages. If you check the page I linked, you'll see that the script requires ]. ] ] 17:55, 13 January 2025 (UTC) | |||
::::@] Sorry. Does your ] can reverse all edits of a user in different page's with clicking on button ? i think you mean that massrollback can reverse all edits in a special wiki page... not all edits of edits of disruptive user in multiple pages ? or i'm wrong ??? ] (]) 04:23, 14 January 2025 (UTC) | |||
:::::If you want this for the Persian Misplaced Pages, you should probably talk to ]. ] (]) 23:41, 14 January 2025 (UTC) | |||
::::::@] Thank you. ] (]) 07:11, 15 January 2025 (UTC) | |||
== Problem For Translate page == | |||
Hello everyone. I don’t know who is in charge for coding the Translate page on Misplaced Pages. But I wanted to send my message to the Misplaced Pages coders, and that is that in the Misplaced Pages translation system, the information boxes for individual persons (i.e personal biography box- see: ]) are not automatically translated, and it is time-consuming for Misplaced Pages users to manually translate and change the links one by one from English to another language. Please, could the coders come up with a solution for translating the information template boxes? Thank you. ] (]) 17:32, 13 January 2025 (UTC) | |||
People come to Wiki for data, editors come to Wiki for power. At some point, and it seems to have occurred, the needs of the editors will dominate and nothing submitted will be quite "good enough", or comply with the myriad of policies being promulgated. | |||
:Hi {{u|Hulu2024}}, this also applies to the section above. If your proposal only applies to the English Misplaced Pages then it is probably best to post it at ] in the first instance. If it is only about the Persian Misplaced Pages then you may wish to try there. If it is more general then you could try ], or, for more formal proposals, ]. ] (]) 18:51, 13 January 2025 (UTC) | |||
::@] Thank you. ] (]) 19:21, 13 January 2025 (UTC) | |||
== A discrimination policy == | |||
Gathering data is hard, editing by comparison is easy. But editors don't think so. So diversity is weeded out, fresh data sources are turned away, and Wiki stagnates into irrelevence. | |||
{{Discussion top|result= i quit this will go no where im extremely embarassed and feel horrible i dont think ill try again}} | |||
<s>Ani cases: | |||
As Kurt Vonnegut, Jr. put it, "So it goes." <small>—The preceding ] comment was added by ] (] • ]){{#if:21:46, August 22, 2007 (UTC)| 21:46, August 22, 2007 (UTC)}}.</small><!-- Template:Unsigned --> <!--Autosigned by SineBot--> | |||
* ] | |||
* ] | |||
* ] | |||
* | |||
I would like to start this proposal by saying that this concept was a proposal in 2009 which failed for obvious reasons. But in this year, 2025, we need it as its happened a bunch. its already under personal attacks but this I feel and a couple other Wikipedians that it should be codified as their is precedent for blocking users who discriminate. Here’s a list of the things I want to include in this policy. edit: This policy is intended to target blatant and admitted instances of discrimination. If the intent behind an action is ambiguous, users should continue to assume good until the intent is.<br> | |||
:To clarify, an editor is anyone who clicks "edit this page." You cannot contribute anything to Misplaced Pages without being an editor. It is quite unwise to suggest that the majority of people who contribute do so to gain some sort of power; I can assure you that I edit because I enjoy doing so. ] 08:15, 23 August 2007 (UTC) | |||
Just as being a member of a group does not give one special requirements to edit, it also does not endow any special privileges. One is not absolved of discrimination against a group just because one claims to be a member of that group. | |||
What counts as discrimination | |||
::Not only that, but anyone has the potential to influence almost any part of Misplaced Pages's system – from processes to policies – if they have sufficient wit and will to do so. But we each have to accept that there will be some things with which we don't agree, but they are set up a certain way for a reason and have widespread support. Nothing is perfect for everyone, and that is as true of Misplaced Pages as it is of real life. '''''] ]''''' 20:59, 23 August 2007 (UTC) | |||
* ] | |||
:::<minor_rant>True indeed, for example, Misplaced Pages will probably never be perfect for me unless and until people use more precise terms than "]" when editing articles. Perhaps promoting a ] is another putative means of projecting personal power.</minor_rant> ] 14:41, 24 August 2007 (UTC) | |||
* Disability (will define this further)
* Disease | |||
* ] (different from sex; neurological)<ref>{{Cite AV media |url=https://www.youtube.com/watch?v=fpGqFUStcxc |title=Let’s All Get Past This Confusion About Trans People |date=2022-06-06 |last=Professor Dave Explains |access-date=2025-01-15 |via=YouTube}}</ref><ref>{{Cite journal |last=Altinay |first=Murat |last2=Anand |first2=Amit |date=2020-08-01 |title=Neuroimaging gender dysphoria: a novel psychobiological model |url=https://link.springer.com/article/10.1007/s11682-019-00121-8 |journal=Brain Imaging and Behavior |language=en |volume=14 |issue=4 |pages=1281–1297 |doi=10.1007/s11682-019-00121-8 |issn=1931-7565}}</ref>
* ] (different from gender; biological)<ref>{{Cite AV media |url=https://www.youtube.com/watch?v=fpGqFUStcxc |title=Let’s All Get Past This Confusion About Trans People |date=2022-06-06 |last=Professor Dave Explains |access-date=2025-01-15 |via=YouTube}}</ref>
* Sexuality | |||
* Religion | |||
* Hobbies (e.g. furry, one of the most often harassed hobbies)
* Relationship status | |||
* Marital status
* (I don't know how to word this, but) lack of parental presence
* Political position (will be a hot topic) | |||
* ] (anything I missed would be in there)
== Referencing of Main Article callouts == | |||
Disability is an umbrella term, in my view.
When a lengthy article calls out the template <nowiki>{{main|subtopic}}</nowiki>, there are often well referenced citations in the called-out subtopic. In order to provide an inline synopsis, the lengthy article often winds up replicating the cites for a questionable improvement in ]. It seems to this humble puppy that we would be better off to have an identified synopsis ''in the subtopic article'' which can be automatically inserted by the call out, keeping all the similar refs in one place. For illustration of some of the issues consider ] and its call outs under ''War crimes'' or the less controversial section ''Literature and movies''. Am I missing a policy/guideline on this topic?] 20:55, 22 August 2007 (UTC) | |||
There are mental and physical disabilities.
== ] == | |||
Examples of mental disabilities would be:
I believe this user page debate should get more attention. Editors should weigh in on whether this sort of ] violates ] and/or ]. If such lists ''are'' uncivil, I think we should ask whether it would also be uncivil for a users to post them on public talk pages. ] '']'' 05:05, 23 August 2007 (UTC) | |||
*An interesting question is what people seek to gain by having such lists (1) visible to the public, and (2) in a place where the subject of said lists can't practically edit them. ] 08:22, 23 August 2007 (UTC) | |||
*The answer is, don't criticize a government using resources the government controls, regardless of whether the criticism is valid. Considering your list is focused on a single issue (global warming), calling it "balance check" is simply a misrepresentation.--] 16:37, 23 August 2007 (UTC) | |||
* schizophrenia | |||
== Academic updating their own article == | |||
* autism | |||
* ADHD | |||
* PTSD | |||
* mood disorders (depression, borderline personality disorder) | |||
* dyslexia (or any learning disability) | |||
Examples of physical disabilities:
Under what circumstances is it appropriate for an academic to update their own article here, with new publications, new interviews, new lectures etc?--] 20:14, 23 August 2007 (UTC) | |||
* paralysis
:See ]. The route suggested is for changes to be brought up on the article's talkpage and then integrated into the article by independent editors. (Having said that, something like the publication of a new book is easily verifiable and I personally wouldn't see any problem with autobiography in a case like that.) --ⁿɡ͡b ]<span style="padding: 0 0.1em;">\</span><sup style="font-size: 70%;">]</sup> 20:20, 23 August 2007 (UTC) | |||
* Pretty much any physical injury | |||
* I'm aware that this rarely happens, but it's good to cover.
A user may not claim without evidence that another user is affected by, or is a member of, any of the above (I'm not sure how to term this).
::Objective and readily verifiable information like the above-mentioned publication of a book is OK, though it has to be prudently sifted so that only the person's most notable stuff is included. I'm far more concerned about cases I've seen where academics have blatantly whitewashed their articles or filled them with puffery. (Not at all to say that such sins are confined to academics.) ] 20:24, 23 August 2007 (UTC) | |||
A user may not claim that users with these disabilities/beliefs/races/genders shouldn’t edit Misplaced Pages. | |||
== New policy == | |||
A user may not imply that another user is beneath them based on who that person is.
Upon reading[REDACTED] articles, I have a concern and an idea for a new Misplaced Pages policy. The policy is based around the notability of places, because many articles on places on[REDACTED] are about non-notable places, often with no importance at all, e.g. city estates and streets. If you see ], you will see that it doesn't have anything to assert why it has an article on wikipedia; it just describes what is in the area, which is pretty lame. So should there be a policy on the notability of places on Misplaced Pages? For example, only established towns, villages, cities and famous geographical locations should have articles, not streets and non-famous local housing wards. ] 12:04, 24 August 2007 (UTC)
:Individual streets are another matter, but settlements are deemed to be inherently notable enough by consensus. '''''] ]''''' 12:44, 24 August 2007 (UTC) | |||
Calling people woke simply because they are queer is discrimination.
::This is what I mean. A policy to restrict articles on places, since some are notable and some are not. All settlements of course can have an article; however, I think settlements with populations under 1000 shouldn't be allowed. ] 12:45, 24 August 2007 (UTC)
I would also like to propose a condition.
:::Yes, I can appreciate the point that you have raised, but I think that consensus would be against it. Applying a threshold of population would be too arbitrary, so you would have to find some other criterion/criteria. The current consensus actually works pretty well I think. '''''] ]''''' 12:50, 24 August 2007 (UTC) | |||
Overreacting to what you think is discrimination (accidental misgendering or wrong pronouns), when the user apologizes for it, is not grounds for an entry at ANI.
::::Well, I think there should be a policy on places anyway. Not just notability, but also the accuracy of content, etc. A policy similar to ] but for places. ] 12:53, 24 August 2007 (UTC)
This should be used as a guideline. | |||
:::::Clearly some degree of notability would trump this, e.g. ] in ]. The question becomes a philosophical one of whether there is a street anywhere devoid of any notable feature, history or inhabitant. How much value do we attribute to individual people's stories? I'm mindful of ''the ] girl'' whose remarkable eyes on a ] cover were famous around the world, yet she had no idea herself until decades later she was revisited. Consider also the ]. Misplaced Pages provides (by dint of its openness) a unique place for pooling of factoids that together reveal a story. To my mind if the article has place-related information not normally captured on a map, it's fair game. ] 13:22, 24 August 2007 (UTC) | |||
::::::More examples include ], a rural center where Leonardo was born or Roda de Isábena, which is a municipality in Spain of only 51 inhabitants but which has one of the oldest cathedrals and was an important medieval center. In other words, there are so many aspects that may make a town important that trying to include all of them in a policy would be, in my opinion, impossible and could pre-empt the inclusion of towns or villages which are in fact notable. Cheers. --] 13:46, 24 August 2007 (UTC) | |||
{{Quote box | |||
:::::::Not to pile on, but you also can't judge a place by its current population. I recently visited ]. It's a sad but fascinating place. There is a good-sized network of brick roads capable of supporting some 200 or more houses. But now there are only a few buildings left, and some of those are falling down. A hundred years ago, though, it was the area's rail hub. Then it got bypassed by the highway. The Afghan girl BTW is ]. ] 02:32, 25 August 2007 (UTC)
| quote = discrimination is defined as acts, practices, or policies that wrongfully impose a relative disadvantage or deprivation on persons based on their membership in a salient social group. This is a comparative definition. An individual need not be actually harmed in order to be discriminated against. He or she just needs to be treated worse than others for some arbitrary reason. If someone decides to donate to help orphan children, but decides to donate less, say, to children of a particular race out of a racist attitude, he or she will be acting in a discriminatory way even if he or she actually benefits the people he discriminates against by donating some money to them. | |||
| source = Misplaced Pages article on discrimination | |||
}} | |||
{{Paragraph break}}I would also like to say that this would get us negative press coverage from right-wing media, and I'll receive shit for it. But I don't care; I can deal with it ]] 16:37, 16 January 2025 (UTC)</s>
::::::::I think I should change that bit. A policy basically on the notability of places, not judged by population, history, or location; just notability. For example, if an article on a non-notable street gets created, it gets deleted. ] 15:53, 25 August 2007 (UTC)
*This largely seems like behavior that already is sanctionable per ] and ] (and the adoption of the latter drew complaints at the time that it in itself was already unnecessarily redundant with existing civility policy on en.wiki). What shortcomings do you see with those existing bodies of policy en force? <sub>signed, </sub>] <sup>]</sup> 16:45, 16 January 2025 (UTC) | |||
<small>(Reset indent)</small> As you probably know, the current system involves falling back on the general ] guideline when no more specific guideline applies. This sounds a bit kludgy when put like that, but it usually works just fine. A non-notable street would still be non-notable when measured in that way and very few notable places would fall through the gap, so to speak. '''''] ]''''' 16:12, 25 August 2007 (UTC)
*:The fact is that punishments should be a little more severe for users who go after a whole group of editors, as it's not just a personal attack, it's an attack on a group ]] 16:57, 16 January 2025 (UTC)
*::NPA violations are already routinely met with blocks and sitebans, often on sight without prior warning for the level of disparagement you're describing. Do you have any recent examples on hand of cases where the community's response was insufficiently severe? <sub>signed, </sub>] <sup>]</sup> 17:07, 16 January 2025 (UTC) | |||
*:::I'll grab some. My issue is that admins can unblock without community input; it should be that after a block from an admin, they have to appeal to the community ]] 17:10, 16 January 2025 (UTC)
*::::<small>Noting that I've now taken the time to read through the three cases listed at the top--two of them ended in NOTHERE blocks pretty quickly--I could see someone taking issue with the community's handling of RowanElder and Jwa05002, although it does seem that the discussion ultimately resulted in an indef block for one and an apparently sincere apology from the other. <sub>signed, </sub>] <sup>]</sup> 17:13, 16 January 2025 (UTC) </small> | |||
*:I think the real problem is that in order to block for any reason you have to take them to a place where random editors discuss whether they are a "net positive" or "net negative" to the wiki, which in principle would be a fair way to decide, but in reality is like the work of opening an RFC just in order to get someone to stop saying random racist stuff, and it's not worth it. Besides, remember the RSP discussion where the Daily Mail couldn't be agreed to be declared unreliable on transgender topics because "being 'gender critical' is a valid opinion" according to about half the people there? I've seen comments that were blatant bigoted insults beneath a thin veneer, that people did not take to ANI because it's just not worth the huge amount of effort. There really needs to be an easy way for administrators to warn (on first violation) and then block people who harass people in discriminatory ways without a huge and exhausting-for-the-complainer "discussion" about it -- and a very clear policy that says discrimination is not OK and is always "net negative" for the encyclopedia would reduce the complexity of that discussion, and I think is an important statement to make. | |||
*:By allowing it to be exhaustively debated whether thinly-veiled homophobic insults towards gay people warrant banning, Misplaced Pages is deliberately choosing not to take a stance on the topic. A stance needs to be taken, and it needs to be clear enough to allow rapid and decisive action that makes people actually afraid to discriminate against other editors, because they know that it isn't tolerated, rather than being reasonably confident their targets won't undergo another exhausting ANI discussion. ] (]) 17:04, 16 January 2025 (UTC)
*::Said better than I could. I agree wholeheartedly; it happens way too much ]] 17:18, 16 January 2025 (UTC)
*I agree that a blind eye shouldn't be turned against discrimination against groups of Misplaced Pages editors in general, but I don't see why we need a list that doesn't include social class but includes hobbies. The determining factor for deciding whether something is discrimination should be how much choice the individual has in the matter, which seems, in practice, to be the way ] is used. ] (]) 17:02, 16 January 2025 (UTC) | |||
*:I agree hobbies don't need to be included. I haven't seen a lot of discrimination based on social class. I think this needs to be taken to the Idea Lab. ] (]) 17:06, 16 January 2025 (UTC)
*::Sorry, this was just me spitballing; I personally have been harassed over my hobbies ]] 17:07, 16 January 2025 (UTC)
*@] Strong support in general (see above) but I strongly suggest you take this to the idea lab, because it's not written as a clear and exact proposal and it would probably benefit a lot from being developed into an RFC before taking it here. In the current format it probably can't pass because it doesn't make specific changes to policy. ] (]) 17:08, 16 January 2025 (UTC) | |||
==Deletion policy== | |||
*:Yeah, sorry, I'm new to this; I was told to come here to get the ball rolling ]] 17:11, 16 January 2025 (UTC)
*Wait...does this mean I won't be able to discriminate against people whose hobby is editing Misplaced Pages? Where's the fun in that? ] 17:09, 16 January 2025 (UTC) | |||
*:I guess not :3 ]] 17:13, 16 January 2025 (UTC) | |||
:In general, I fail to see the problem this is solving. The UCoC and other policies/guidelines/essays (such as ], ], and others) already prohibit discriminatory behavior. And normal conduct processes already have the ability to lay down the strictest punishment theoretically possible - an indefinite ban - for anyone who engages in such behavior. | |||
It is my opinion that the deletion policy needs to be looked at radically. Deleting bands and other organisations because a select few think they are 'irrelevant' is unfair. Misplaced Pages is great because of the endless amount of trivia in it. Some people, comparable to the (edited), have an almost sadistic habit of prowling through articles people are in the middle of working on and deleting them. It's just plain unfair and it needs to stop. ] 14:02, 24 August 2007 (UTC)
:I do not like the idea of what amounts to bureaucracy for bureaucracy’s sake. That is the ''best'' way I can put it. At worst, this is virtue signaling - it’s waving a flag saying “hey, public and editors, Misplaced Pages cares about discrimination so much we made a specific policy about it” - without even saying the next part “but our existing policies already get people who discriminate against other editors banned, so this was not necessary and a waste of time”. I’ll happily admit I’m proven wrong if someone can show evidence of a case where actual discrimination was not acted upon because people were “concerned” it wasn’t violating one of those other policies. -bɜ:ʳkənhɪmez | ] | ] 20:56, 16 January 2025 (UTC) | |||
:<s>You've already been cautioned about personal attacks. Comparing people to the SS is an interesting way to use your ...</s> --]♠] 14:09, 24 August 2007 (UTC) | |||
::To clarify, all the comments about "why is this included" or "why is this not included" are part of the reason I'm against a specific policy like this. Any disruption can be handled by normal processes, and a specific policy will lead to wikilawyering over what is or is not discrimination. There is no need to try to define/specifically treat discrimination when all discriminatory behaviors are adequately covered by other policies already. -bɜ:ʳkənhɪmez | ] | ] 22:27, 16 January 2025 (UTC) | |||
*We should be relating to other editors in a kind way. But this proposal appears to make the editing environment more hostile, with more blocking on the opinion of one person. We do discriminate against those that use Misplaced Pages for wrong purposes, such as vandalism or advertising. Pushing a particular point of view is more of a grey area. The proposal by cyberwolf is partly a point of view that many others would disagree with. So we should concentrate policies on how a user relates to other editors, rather than their motivations or opinions. ] (]) 20:50, 16 January 2025 (UTC)
* I think this is valuable by setting a redline for a certain sort of personal attack and saying, "this is a line nobody is permitted to cross while participating in this project." ] (]) 20:57, 16 January 2025 (UTC) | |||
* It is not possible for the content of a discussion to be "discriminatory". Discrimination is action, not speech. This proposal looks like an attempt to limit discourse to a certain point of view. That's not a good idea. --] (]) 21:13, 16 January 2025 (UTC) | |||
*:Discrimination can very much be speech. ] (]) 00:36, 17 January 2025 (UTC) | |||
*:: Nope. --] (]) 00:44, 17 January 2025 (UTC) | |||
*::: : "treating a person or particular group of people differently, especially in a worse way from the way in which you treat other people, because of their race, gender, sexuality, etc". | |||
*:::So yes, that includes speech because you can treat people differently in speech. Speech is an act. '']''<sup>]</sup> 01:04, 17 January 2025 (UTC) | |||
*::::OK, look, I'll concede part of the point here. Yes, if I'm a dick to (name of group) but not to (name of other group), I suppose that is discrimination, but I don't think a discrimination policy is a particularly useful tool for this, because what I ''should'' do is not be a dick to anybody. | |||
*::::What I'm concerned about is that the policy would be used to assert that certain ''content'' is discriminatory. Say someone says, here's a reliable source that says biological sex is real and has important social consequences, and someone else says, you can't bring that up, it's discriminatory. Well, no, that's a category error. That sort of thing ''can't'' be discriminatory. --] (]) 01:29, 17 January 2025 (UTC) | |||
*:::just drop it ]] 01:23, 17 January 2025 (UTC) | |||
*I would remove anything to do with political position. Those on the far-right should be discriminated against. '']''<sup>]</sup> 21:45, 16 January 2025 (UTC)
:* The examples you use show that we've been dealing effectively without this additional set of guidelines; it would be more convincing that something was needed if you had examples where the lack of this policy caused bad outcomes. And I can see it being used as a hammer; while we're probably picturing "as a White man, I'm sure that I understand chemistry better than any of you lesser types" as what we're going after, I can see some folks trying to wield it against "as a Comanche raised on the Comanche nation, I think I have some insights on the Comanche language that others here are overlooking." As such, I'm cautious. -- ] (]) 21:49, 16 January 2025 (UTC) | |||
OK, edited, but can we discuss the matter at hand and not play around with semantics? ] 14:10, 24 August 2007 (UTC) | |||
*'''Comment'''. I am sorry that ] discrimination is being ignored here. ] (]) 21:54, 16 January 2025 (UTC). | |||
:<s>It's not semantics. Being ] is a policy you've repeatedly ignored several times today. You created an article that looked like a hoax. It was speedily deleted. Maybe it shouldn't have been deleted so quickly, but it didn't assert any ] and didn't include any ].</s> That a subject is ] should ''always'' be a requirement of an article, and no changes to deletion policy should be made that would change that. --]♠] 14:17, 24 August 2007 (UTC) | |||
*'''Not needed'''. Everything the proposal is talking about would constitute disruptive behavior, and we can block or ban someone for being disruptive already. No need to break disruption down into its component parts, and write rules for each. ] (]) 22:07, 16 January 2025 (UTC) | |||
{{reflist-talk}} | |||
{{Discussion bottom}} | |||
That is fair enough, but civility is something which must be returned, and I have not seen nearly enough of it. Many articles have been removed unjustly and without adequate explanation. In the Redboy article we were in the middle of improving it and I had just finished a big expansion only to find that the article was deleted, despite pleading for some time on its talk page. There is no civility in that and I was rightly annoyed. ] 14:25, 24 August 2007 (UTC) | |||
== Repeated false retirement == | |||
The problem with these endless policies is that a select few decide what is relevant and what is not. Obvious things such as hate articles or complete and utter spam should be deleted, but anything saying something about anything should be left there. You don't have to look at an article about Redboy if you don't want to, but it's none of your business to go around deleting the said article. ] 14:37, 24 August 2007 (UTC)
:I understand your point Johnjoecavanagh, but you are wrong when you say that "it is not your business to go around deleting". According to ], certain articles may be deleted. If you think the article was deleted unfairly according to that policy, then you should consult the person who deleted it. If you still think the deletion was not correct, then you have other mechanisms. I understand your ''anger'', but recall that coming here and stating phrases like "a select few decide" may put the community against you instead of in your favour. It would be more constructive to argue which points of the policy were violated. I hope this helps. --] 14:49, 24 August 2007 (UTC)
There is a user (who shall remain unnamed) who has "retired" twice and had the template removed from their page by other users because they were clearly still editing. They are now on their third "retirement", yet they last edited a few days ago. I don't see any policy formally prohibiting such behavior, but it seems extremely unhelpful for obvious reasons. ] 17:13, 16 January 2025 (UTC) | |||
Thanks for talking to me like a person and not with those endless templates. | |||
:Unless the material is harmful to Misplaced Pages or other users, users have considerable leeway in what they may post on their user page. Personally, I always take "retirement" notices with a grain of salt. If a user wants to claim they are retired even though they are still actively editing, I don't see the harm to anything but their credibility. If I want to know if an editor is currently active, I look at their contributions, not at notices on their user or talk page. ] 22:07, 16 January 2025 (UTC) | |||
My argument is that the deletion policy is unfair. Misplaced Pages is great for trivia and urban legends, and I would like to see that restored. I have been here in guises before and have contributed to articles; it's not fair, though, that a few 'committed' sysops feel it necessary to delete some articles. I think the deletion policy should be rolled back completely to simply weeding out hate articles etc. I'm in the middle of organising a petition and will get back to you when we get our first 100 signatories. ] 15:02, 24 August 2007 (UTC)
:{{br}}I can't imagine that this calls for a policy. You're allowed to be annoyed if you want. No one can take that away from you. But I'm missing an explanation of why the rest of us should care. --] (]) 22:13, 16 January 2025 (UTC) | |||
:That's the good way to solve that problem. Good luck! Cheers. --] 15:07, 24 August 2007 (UTC) | |||
::This seems a little prickly, my friend. Clearly, the other two users who removed older retirement notices cared. At the end of the day, it's definitely not the most major thing, but it is helpful to have a reliable and simple indication as to whether or not a user can be expected to respond to any kind of communication or feedback. I'm not going to die on this hill. Cheers. ] 22:41, 16 January 2025 (UTC) | |||
:::A "retirement notice" from a Misplaced Pages editor is approximately as credible as a "retirement notice" from a famous rock and roll band. Ignore it. ] (]) 03:01, 20 January 2025 (UTC) | |||
:::FWIW, those two other editors were in the wrong to edit another person's user page for this kind of thing. And the retired banner ''does'' indicate: don't expect a quick response, even if I made an edit a few days or even minutes ago, as I may not be around much. ] (]) 12:28, 20 January 2025 (UTC) | |||
:There's a lot of active editors on the project, with retirement templates on their user pages. ] (]) 03:11, 20 January 2025 (UTC) | |||
:I think it's kind of rude to edit someone else's user page unless there is an extreme reason, like reversing vandalism or something. On ] I don't see anything about retirement templates, but I do see it says "In general, one should avoid substantially editing another's user and user talk pages, except when it is likely edits are expected and/or will be helpful. If unsure, ask." If someone wants to identify as retired but sometimes drops by and edits, that doesn't seem to hurt anything. ] <sup> (]) </sup> 03:56, 20 January 2025 (UTC)
:Misplaced Pages is ], so even a "non-retired" editor might never edit again. And if someone is "retired" but still constructively edits, just consider that a bonus. What's more problematic is a petulant editor who "retires", but returns and edits disruptively; in such case, it's their disruptive behavior that would be the issue, not a trivial retirement notice. —] (]) 07:42, 20 January 2025 (UTC) | |||
*As far as Misplaced Pages is concerned it's just another userbox you can put on your userpage. We only remove userboxes and userspace material if they're claiming to have a right that they don't (i.e. a user with an Administrator toolbox who isn't an admin). Retirement is not an official term defined in policy anywhere, and being retired confers no special status. '''] ]''' 11:13, 20 January 2025 (UTC)
:If you see a retirement template that seems to be false you could post a message on the user talk page to ask if they are really retired. I suppose it could be just a tiny bit disruptive if we cannot believe such templates, but nowhere near enough to warrant sanctions or a change in policy. ] (]) 13:39, 20 January 2025 (UTC) | |||
== What is the purpose of banning? == | |||
:''"The problem with these endless policies is that a select few decide what is relevant and what is not."'' | |||
:I disagree. I think you should review the ] and participate on its ] with specific things you'd like to see changed. As far as I know, everyone is welcome to leave their input and suggestions. I don't believe that a complete overhaul needs to be done here...although I do also strongly disagree that "the deletion policy should be rolled back completely to simply weeding out hate articles." --]♠] 17:21, 24 August 2007 (UTC) | |||
In thinking about a recent banned user's request to be unblocked, I've been reading ] and ] trying to better understand the differences. In particular, I'm trying to better understand what criteria should be applied when deciding whether to end a sanction. | |||
I have just put the petition up there now. If anyone has read this and agrees with our point of view, please sign the petition:
One thing that struck me is that for blocks, we explicitly say {{tq|Blocks are used to prevent damage or disruption to Misplaced Pages, not to punish users}}. The implication being that a user should be unblocked if we're convinced they no longer present a threat of damage or disruption. No such statement exists for bans, which implies that bans ''are'' a form of punishment. If that's the case, then the criteria should not just be "we think they'll behave themselves now", but "we think they've endured sufficiently onerous punishment to atone for their misbehavior", which is a fundamentally different thing.
http://www.upetitions.com/petitions/index.php?id=195 | |||
I'm curious how other people feel about this. ] ] 16:15, 20 January 2025 (UTC) | |||
:My understanding (feel free to correct me if I am wrong) is that blocks are made by individual admins, and may be lifted by an admin (noting that CU blocks should only be lifted after clearance by a CU), while bans are imposed by ARBCOM or the community and require ARBCOM or community discussion to lift. Whether block or ban, a restriction on editing should only be imposed when it is the opinion of the admin, or ARBCOM, or the community, that such restriction is necessary to protect the encyclopedia from further harm or disruption. I think bans carry the implication that there is less chance that the banned editor will be able to successfully return to editing than is the case for blocked editors, but that is not a punishment, it is a determination of what is needed to protect WP in the future. ] 16:44, 20 January 2025 (UTC)
It's not wikipedia's place to provide anyone with a platform to speak, in fact it's specifically ] supposed to be a soapbox. Rather, it's here to be an encyclopedia and a source of verifiable facts. Information that doesn't fit that ''ought'' to be deleted, and there's nothing oppressive or censoring about it. ] (]/]) 19:23, 24 August 2007 (UTC) | |||
:Good question. I'm interested in what ban evaders think about current policies: people who have created multiple accounts, been processed at SPI multiple times, and made substantial numbers of edits, the majority of which are usually preserved by the community in practice for complicated reasons (a form of reward in my view; the community sends ban evading actors very mixed messages). What's their perspective on blocks and bans, and on how to reduce evasion? It is not easy to get this kind of information, unfortunately, as people who evade bans and blocks are not very chatty, it seems. But I have a little bit of data from one source, Irtapil, for interest. Here are a couple of views from the other side.
:* On socking - "automatic second chance after first offense with a 2 week ban / block, needs to be easier than making a third one so people don't get stuck in the loop" | |||
:* On encouraging better conduct - "they need to gently restrict people, not shun and obliterate" | |||
:No comment on the merits of these views, or whether punishment is what is actually happening, or is required, or effective, but it seems clear that it is likely to be perceived as punishment and counterproductive (perhaps unsurprisingly) by some affected parties. ] (]) 17:31, 20 January 2025 (UTC) | |||
:Blocks are a sanction authorized by the community to be placed by administrators on their own initiative, for specific violations as described by a policy, guideline, or arbitration remedy (in which case the community authorization is via the delegated authority to the arbitration committee). Blocks can also be placed to enforce an editing restriction. A ban is an editing restriction. As described on the banning policy page, it is a {{tq|formal prohibition from editing some or all pages on the English Misplaced Pages, or a formal prohibition from making certain types of edits on Misplaced Pages pages. Bans can be imposed for a specified or an indefinite duration.}} Aside from cases where the community has delegated authority to admins to enact bans on their own initiative, either through community authorization of discretionary sanctions, or arbitration committee designated contentious topics, editing restrictions are authorized through community discussion. They cover cases where there isn't a single specific violation for which blocking is authorized by guidance/arbitration remedy, and so a pattern of behaviour and the specific circumstances of the situation have to be discussed and a community consensus established. | |||
:Historically, removing blocks and bans requires a consensus from the authorizing party that removing it will be beneficial to the project. Generally, the community doesn't like to impose editing restrictions when there is promise of improved behaviour, so they're enacted for more severe cases of poor behaviour. Thus it's not unusual that the community is somewhat skeptical about lifting recently enacted restrictions (where "recent" can vary based on the degree of poor behaviour and the views of each community member). Personally I don't think this means an atonement period should be mandated. ] (]) 18:33, 20 January 2025 (UTC)
*I think that a block is a preventive measure, whereas a ban is where the community's reached a consensus to uninvite a particular person from the site. Misplaced Pages is the site that anyone can edit, except for a few people we've decided we can't or won't work with. A ban is imposed by a sysop on behalf of the community whereas a block is imposed on their own authority.—] <small>]/]</small> 19:39, 20 January 2025 (UTC) | |||
*:A ban does not always stop you from editing Misplaced Pages. It may prohibit you from editing in a certain topic area (BLP for example or policies) but you can still edit other areas. ] (solidly non-human), ], ] 00:24, 23 January 2025 (UTC) | |||
*Seems to be addressed in ], which explains that the criterion is ''not'' dependent upon an editor merely ''behaving'' with what appears to be "{{tq|good or good-faith edits}}". A ban is based on a persistent or long-term pattern of editing behavior that demonstrates a significant risk of "{{tq|disruption, issues, or harm}}" to the area from which they are banned, despite any number of positive contributions said editor has made or is willing to make moving forward. As such, it naturally requires a higher degree of review (i.e. a form of community consensus) to be imposed or removed (though many simply expire upon a pre-determined expiration date without review). While some may interpret bans as a form of punishment, they are still a preventative measure at their core. At least that's my understanding. --] (]) 12:59, 21 January 2025 (UTC)
== Contacting/discussing organizations that fund Misplaced Pages editing == | |||
Wikipedia ceased to be an encyclopedia as we know it years ago; it's much more now. If you want to understand academic pursuits, you go to a library or Encyclopaedia Britannica. Rather, it is now a collection of ''all'' human knowledge, be that an urban legend or an obscure punk band. ] 19:34, 24 August 2007 (UTC)
I have seen it asserted that contacting another editor's employer is always harassment and therefore grounds for an indefinite block without warning. I absolutely get why we take it seriously and 99% of the time this norm makes sense. (I'm using the term "norm" because I haven't seen it explicitly written in policy.) | |||
:I suggest you take a look at ], which specifically notes that[REDACTED] is ''not'' an indiscriminate collection of knowledge. What you suggest would be a fundamental change in the mission of wikipedia. I'm not saying it shouldn't be changed (though I personally don't think it should be), just that that's far too big a change to stand much chance of being made. ](]) 20:33, 24 August 2007 (UTC) | |||
In some cases there is a conflict between this norm and the ways in which we handle disruptive editing that is funded by organizations. There are many types of organizations that fund disruptive editing - paid editing consultants, corporations promoting themselves, and state propaganda departments, to name a few. Sometimes the disruption is borderline or unintentional. There have been, for instance, WMF-affiliated outreach projects that resulted in copyright violations or other crap being added to articles. | |||
I would agree about the policy change here. I've been working very hard on ] but, because the article has been poorly done in the past and deleted, my new article has been deleted. I managed to get it restored and added reliable sources, but it was deleted again, and I think only because the article had been deleted in the past. Nobody read or commented on the article itself. Very often articles are deleted before anybody has a chance to add sources and prove notability - the very reason they're deleted! Also, some people simply go around Wikipedia deleting articles and not contributing anything, which I think is really sad. We're supposed to be building something here to make it better, not taking stuff away. <small>—Preceding ] comment added by ] (] • ]) 22:29, August 24, 2007 (UTC)</small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
We regularly talk ] and off-wiki about organizations that fund Misplaced Pages editing. Sometimes there is consensus that the organization should either stop funding Misplaced Pages editing or should significantly change the way they're going about it. Sometimes the WMF legal team sends cease-and-desist letters. | |||
The speedy deletion policy is unfair because it doesn't give time to people who don't have all the relevant facts at that time and date, but wish to work on the article later in the day, and not all in one go. It's unfair because a new article is routinely deleted before someone can actually update it. I propose that instead of constantly deleting articles, we place tags on them so that they are automatically deleted within four days if nothing is done to improve them or if references are not put in place. It is the only fair compromise. ] 22:34, 24 August 2007 (UTC)
:Well, we do have ], which sort of works like that. Also, if an editor wants to work on an article over time to bring it up to minimum Misplaced Pages standards, they can always do that in their userspace before moving it to main space. That's how I develop all of the articles I create. -] 22:50, 24 August 2007 (UTC) | |||
::Articles "go live", if you will, immediately after you click save the first time. They should be presentable to the general public from the very first edit. I have not written that many articles (we have so many, focus should be on improvement) but cannot recall an instance where one of mine has been speedy deleted. The ] are not hard to understand, and it is not difficult to make an article that is not speedy deletable. Even if the article just looks good (follows the ], may have an infobox, ] are good, etc.), people may give your article the benefit of the doubt. <font color="maroon">]</font>'''<small>]</small>''<font color="navy" face="cursive">]</font>''''' 02:31, 25 August 2007 (UTC)
Now here's the rub: Some of these organizations employ Misplaced Pages editors. If a view is expressed that the organizations should stop the disruptive editing, it is foreseeable that an editor will lose a source of income. Is it harassment for an editor to say "Organization X should stop/modify what it's doing to Misplaced Pages?" at AN/I? Of course not. Is it harassment for an editor to express the same view in a social media post? I doubt we would see it that way unless it names a specific editor. | |||
It's become much more than that now. The website needs to fundamentally change to be more accepting of trivia and cultish topics. The base academics are all covered here; Wikipedia is supposed to be a collection of all human knowledge. And I don't care what you say, Wikipedia is a democracy, since ''we'' the people pay for it through voluntary donations. ] 22:55, 24 August 2007 (UTC)
Yet we've got this norm that we absolutely must not contact any organization that pays a Misplaced Pages editor, because this is a violation of the harassment policy. Where this leads is a bizarre situation in which we are allowed to discuss our beef with a particular organization on AN/I but nobody is allowed to email the organization even to say, "Hey, we're having a public discussion about you." | |||
:Paying for something does not give control, short of the fact that people will stop paying if they don't like it. It's not like it's paid for by any government, it's not coming out of anyone's taxes. Please read ] as has already been suggested, and consider what that says in your further contributions to these discussions. ](]) 23:09, 24 August 2007 (UTC) | |||
I propose that '''if an organization is reasonably suspected to be funding Misplaced Pages editing, contacting the organization should not in and of itself be considered harassment.''' I ask that in this discussion, we not refer to real cases of alleged harassment, both to avoid bias-inducing emotional baggage and to prevent distress to those involved. ] (] <nowiki>|</nowiki> ]) 03:29, 22 January 2025 (UTC) | |||
:Wikipedia is not a collection of all human knowledge. I know Jimbo says it is, but that's because Jimbo is publicizing Wikipedia. ''Everything'' does not get an article. The article you wrote may have been about a notable topic, and if you can rewrite it so that it asserts its notability and has reliable sources (by the way, you can do this without disruption in your userspace: either use your user page, or add a slash after the URL and add the title of your page), then you are welcome to recreate it. If the article's subject is ultimately non-notable (if there are reliable secondary sources, it probably isn't though), someone may feel it should be deleted, as is their right according to Wikipedia policy, but that's not something you have to worry about if the deletion really was a mistake. ] 05:17, 25 August 2007 (UTC)
*If it's needful to contact an organisation about one of their employees' edits, Trust and Safety should do that. Not volunteers.—] <small>]/]</small> 09:21, 22 January 2025 (UTC) | |||
I have read that. It's irrelevant whether it's coming out of anyone's taxes or not. What is relevant is that we pay for it, and we deserve a say in how it's run. That's not unreasonable. ] 23:10, 24 August 2007 (UTC)
*:Let's say Acme Corporation has been spamming Misplaced Pages. If you post on Twitter "Acme has been spamming Misplaced Pages" is that harassment? How about if you write "@Acme has been spamming Misplaced Pages?" Should only Trust and Safety be allowed to add the @ sign? ] (] <nowiki>|</nowiki> ]) 15:43, 22 January 2025 (UTC) | |||
*::What you post on Twitter isn't something Misplaced Pages can control. But contacting another editor's employer about that editor's edits has a dark history on Misplaced Pages.—] <small>]/]</small> 15:49, 22 January 2025 (UTC) | |||
*:::The history is dark indeed. What I'm pointing out is that writing "@Acme has been spamming Misplaced Pages" on Twitter '''is''' contacting another editor's employer. Should you be indef blocked without warning for doing that? ] (] <nowiki>|</nowiki> ]) 15:56, 22 January 2025 (UTC) | |||
*::::You want an "in principle" discussion without talking about specific cases, so the only way I can answer that is to say: Not always, but depending on the surrounding circumstances, possibly.—] <small>]/]</small> 16:11, 22 January 2025 (UTC) | |||
*:::::I agree. You said it better than I did. ] (] <nowiki>|</nowiki> ]) 18:56, 22 January 2025 (UTC) | |||
Another issue is that doing that can sometimes place another link or two in a wp:outing chain, and IMO avoiding that is of immense importance. The way that you posed the question with the very high bar of "always" is probably not the most useful for the discussion. Also, a case like this almost always involves a concern about a particular editor or centers around edits made by a particular editor, which I think is a non-typical omission from your hypothetical example. Sincerely, <b style="color: #0000cc;">''North8000''</b> (]) 19:41, 22 January 2025 (UTC)
:You do have a say. You're saying it right now. -] 23:13, 24 August 2007 (UTC) | |||
::And what about the people that have donated (myself included) that don't want to change things for no stated reason? Also, you are choosing to pay for it, it is a '''''donation'''''. If Bill Gates donates $3 billion to the Red Cross, great for him and the Red Cross. He can include it on his taxes and they can buy some bloodmobiles and provide emergency relief to a few thousand more people. He can't say however, after he donates it, that he only wants them to focus on disaster relief in Africa. Misplaced Pages is the same way. It is a non-profit organization, you ] the money, you are giving it away. <font color="maroon">]</font>'''<small>]</small>''<font color="navy" face="cursive">]</font>''''' 02:31, 25 August 2007 (UTC) | |||
:I'm not sure what you mean by placing a link in an outing chain. Can you explain this further? I used the very high bar of "always" because I have seen admins refer to it as an "always" or a "bright line" and this shuts down the conversation. Changing the norm from "is always harassment" to "is usually harassment" is exactly what I'm trying to do. | |||
:The foundation doesn't have shareholders; you're a donor, not a shareholder. The only reason you have any say is that everyone who cares to say anything has a say. Ultimately, the foundation has the last word because it's their servers. People give them money, they buy servers and run the service, but that doesn't mean that the donors have any right of equity over the service or the servers. I've never donated, but I have no less right to say or do anything on Wikipedia than people who have. If you want to spend your own money setting up a SumOfAllKnowledgeWiki, then go ahead. Wikipedia isn't it. If you donate money to Wikipedia wanting it to be that, then I suggest you not donate again. ](]) 23:21, 24 August 2007 (UTC)
:Organizations that fund disruptive editing often hire just one person to do it but I've also seen plenty of initiatives that involve money being distributed widely, sometimes in the form of giving perks to volunteers. ''If'' the organization is represented by only one editor then there is obviously a stronger argument that contacting the organization constitutes harassment. ] (] <nowiki>|</nowiki> ]) 06:44, 23 January 2025 (UTC) | |||
== General reliability discussions have failed at reducing discussion, have become locus of conflict with external parties, and should be curtailed == | |||
My opinion, for what it is worth, is that some articles ARE speedied or prodded excessively, preventing editors from properly developing them. One example (which I recently noticed) involved a stub on an elected premier of an Indian state being prodded by someone <s>(I think it was an admin)</s> who managed to combine rudeness and a profound ignorance of Indian history, with a claim that orphan and dead-end articles should be deleted - my understanding was that orphan articles should be de-orphaned, and appropriate links inserted into dead-ends. I also believe that, occasionally, deletion debates are improperly closed, with e.g. one participant calling for deletion and 5 or 6 for keep, being closed as a delete, with no explanation by the closing admin as to why they have apparently ignored what the consensus appears to be. I do not contribute to deletion debates half as much as I would if I had confidence in the policies and their implementation. ] 23:24, 24 August 2007 (UTC)
The original ] discussion, which set off these general reliability discussions in 2017, was supposed to reduce discussion about it, something which it obviously failed to do since we have had more than 20 different discussions about its reliability since then. Generally speaking, a review of ] does not support the idea that general reliability discussions have reduced discussion about the reliability of sources either. Instead, we see that we have repeated discussions about the reliability of sources, even where their reliability was never seriously questioned. We have had a grand total of 22 separate discussions about the reliability of the BBC, for example, 10 of which have been held since 2018. We have repeated discussions about sources that are cited in relatively few articles (e.g., Jacobin). | |||
:Now ''that'''s a valid point worthy of debate. That said, I have no idea how it could be dealt with. ](]) 23:31, 24 August 2007 (UTC) | |||
::We ''already have'' ways to deal with things like that. If an article is prodded and you disagree, you can remove the template. You don't even technically have to fix anything. If it is deleted before you can object to the prod, it can be immediately undeleted at ]. If you believe an article was improperly deleted at ], try ]; even contacting the deleting admin helps. Requiring discussion on every deletion case would be a disaster. If you don't believe me, watch ] for a few minutes. Also please note that AFD is not a vote. If 25 people say to keep an article about Joe Schmo because they think he is the ] and one person says to delete because the article is ], a ], and the person is ] - and all of that is true - the admin ''should'' delete the article. <font color="maroon">]</font>'''<small>]</small>''<font color="navy" face="cursive">]</font>''''' 02:31, 25 August 2007 (UTC)
Moreover, these discussions spark unnecessary conflict with parties off wiki that harms the reputation of the project. Most recently we have had an unnecessary conflict with the Anti-Defamation League sparked by a general reliability discussion about them, but the original Daily Mail discussion did this also. In neither case was usage of the source a problem generally on Wikipedia in any way that has been lessened by their deprecation - they were neither widely used, nor permitted to be used in a way that was problematic by existing policy on using reliable sources.
:::And just how many Wikipedians have any knowledge or understanding of those processes? Who actually bothers to tell new users what they can do to get their deleted articles restored? How many, for that matter, have the courtesy to inform an article's creator of a prod, or of what it actually means? I give out a lot of welcome boxes to new users, and the level of snobbery and rudeness that some of them are exposed to over articles they have created is appalling, especially from editors who seem to specialise in deletions. Those processes also effectively exclude editors who never had time to see the deleted article in the first place. I don't for one moment deny that some articles need to be deleted, but the process is not, in my opinion, working in a way that allows editors (especially new editors) to participate properly. I know perfectly well that AfD and CfD are not votes - but I have seen several debates (both in AfD and CfD) where I do not believe anything approaching a consensus to delete was obtained, and they were not cases of ''coolness'', ''unsourced'', or ''trivial'', and again I make the point that no explanation was given. It is entirely possible that there was a good reason for going against the consensus in the debate - but if the closer doesn't bother to explain it, then it is very hard for the rest of us to understand what that reason was. It would be fascinating if someone with the time and expertise were to analyze just how many editors are lost to Wikipedia after bad experiences over their first creations, or who shy away from creating articles because they are deterred by over-zealous deletions. ] 08:31, 25 August 2007 (UTC)
::::When looking at a page that has been deleted, it includes the deletion log at the bottom so people can see the reason. There are also links to tutorial and help pages, as well as ], which includes steps to follow and links to ] if they think a deletion was conducted improperly. In my opinion, people should shy away from creating new articles. We already have 1.9 million+. Now is the time to improve existing ones. As of the last count, there were 83,512 articles tagged as lacking sources and 5,640 tagged as NPOV disputes.
::::It is really hard to address the concerns here because no suggestions are being made. We can't discuss everything; the ] number of pages in ] at any given time is 197.9 - and pages there rarely stay for more than a few hours. Admins are given their tools because the ] trusts their discretion. While it is unfortunate that some people don't inform users that their pages are tagged for deletion (I think we have bots already for speedy and AFD; you could suggest more at ].) we can't make it a requirement, otherwise that would open up a really bad loophole and would make the deletion system even more backlogged. There are clear ways to contest deletion. The ] tags all include information about the {{tl|hangon}} template, {{tl|Prod}} says that anyone can remove the template to contest deletion, and {{tl|Afd}} asks people to "share their thoughts" on the discussion page. I don't see how we can indicate more strongly that people can contest the deletion without saying "please contest this" in <font color=#FF4F00><big><u>'''big bold orange font'''</u></big></font> instead of "please share your thoughts." (Statistics from ]) <font color="maroon">]</font>'''<small>]</small>''<font color="navy" face="cursive">]</font>''''' 16:19, 25 August 2007 (UTC)
There is also some evidence, particularly from ], that some editors have sought to "claim scalps" by getting sources they are opposed to on ideological grounds 'banned' from Misplaced Pages. Comments in such discussions are often heavily influenced by people's impression of the bias of the source. | |||
== ] == | |||
I think at the very least we need a ]-like requirement for these discussions, where the editors bringing the discussion have to show that the source is one whose reliability has serious consequences for content on Wikipedia, and that they have tried to resolve the matter in other ways. The recent discussion about Jacobin, triggered simply by a comment by a Jacobin writer on Reddit, would be an example of a discussion that would be stopped by such a requirement. ] (]) 15:54, 22 January 2025 (UTC)
] has been proposed as a new guideline. Are these people significantly different enough to merit a new guideline? Or is this ] --] 18:16, 24 August 2007 (UTC) | |||
*The purpose of this proposal is to reduce discussion of sources. I feel that evaluating the reliability of sources is the single most important thing that we as a community can do, and I don't want to reduce the amount of discussion about sources. So I would object to this.—] <small>]/]</small> 16:36, 22 January 2025 (UTC) | |||
**I don't think it's meant to reduce discussion but instead to start more discussions at a more appropriate level than at VPP or RSP. Starting the discussion at the VPP/RSP level means you are trying to get all editors involved, which for most cases isn't really appropriate (e.g. one editor has a beef about a source and brings it to wide discussion before getting other input first). Foarp is right that opening these discussions at VPP or RSP without prior attempts to resolve elsewhere is a wear on the process.<span id="Masem:1737564932296:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> — ] (]) 16:55, 22 January 2025 (UTC)</span>
***Oh, well that makes more sense. We could expand ] to cover WP:RSP?—] <small>]/]</small> 17:06, 22 January 2025 (UTC) | |||
***:Basically this. I favour something for RSP along the lines of ]/], an ] if you will. ] (]) 21:50, 22 January 2025 (UTC) | |||
*Yeah I would support anything to reduce the constant attempts to kill sources at RSN. It has become one of the busiest pages on all of Misplaced Pages, maybe even surpassing ANI. -- ]] 19:36, 22 January 2025 (UTC) | |||
*Oddly enough, I am wondering why this discussion is here? And not Talk RSN:], as it now seems to be a process discussion (more BEFORE) for RSN? ] (]) 22:41, 22 January 2025 (UTC) | |||
*Some confusion about pages here, with some mentions of RSP actually referring to RSN. RSN is a type of "before" for RSP, and RSP is intended as a summary of repeated RSN discussions. One purpose of RSP is to put a lid on discussion of sources that have appeared at RSN too many times. This isn't always successful, but I don't see a proposal here to alleviate that. Few discussions are started at RSP; they are started at RSN and may or may not result in a listing or a change at RSP. Also, many of the sources listed at RSP got there due to a formal RfC at RSN, so they were already subject to RFCBEFORE (not always obeyed). I'm wondering how many listings at RSN are created due to an unresolved discussion on an article talk page—I predict it is quite a lot. ]<sup><small>]</small></sup> 04:40, 23 January 2025 (UTC) | |||
*:“Not always obeyed” is putting it mildly. ] (]) 06:47, 23 January 2025 (UTC) | |||
== Primary sources vs Secondary sources == | |||
== RFC: Notability of years == | |||
{{main|Misplaced Pages talk:Manual of Style/Television#Episode Counts}} | |||
I am requesting comments on ], a policy which at present merely writes down what precedent has already said. It needs to address issues for which the precedent is unclear, such as the notability of fictional references to future years. ] 22:49, 24 August 2007 (UTC) | |||
The discussion above has spiralled out of control, and needs clarification. The discussion revolves around how to count episodes for TV series when a traditionally shorter episode (e.g., 30 minutes) is broadcast as a longer special (e.g., 60 minutes). The main point of contention is whether such episodes should count as one episode (since they aired as a single entity) or two episodes (reflecting production codes and industry norms). | |||
The simple question is: <u>when primary sources and secondary sources conflict, which do we use on Wikipedia?</u>
== Reference styles == | |||
* The contentious article behind this discussion is at ], in which , and all state that the series has 100 episodes; article from TFC, which is a direct copy of the press release from Disney Channel, also states that the series has "100 half-hour episodes". | |||
I cannot seem to get this information from ] or similar areas. Which of these is correct? | |||
* The article has 97 episodes listed; the discrepancy is from three particular episodes that are all an hour long (in a traditionally half-hour-long slot). These episodes receive two production codes, indicating two episodes, but each aired as one singular, continuous release. An editor argues that the definition of an episode means that these count as a single episode, and stands by these episodes being the important primary sources.
* Some statement. | |||
* The discussion above centers on what an episode is. Should these be considered one episode (per the primary source of the episode), or two episodes (per the secondary sources provided)? This is where the primary conflict is.
* Some statement.<ref>http://www.google.com</ref> ... (with obviously a REF section on the page) | |||
* Multiple editors have stated that the secondary sources refer to the ''production'' of the episodes, despite the secondary sources not using this word in any format, and that the primary sources therefore override the "incorrect" information of the secondary sources. Some editors have argued that there are 97 episodes, because that's what's listed in the article. | |||
* ] has been cited; {{tq|Routine calculations do not count as original research, provided there is consensus among editors that the results of the calculations are correct, and a meaningful reflection of the sources}}. An editor argues that there is not the required consensus. ] was also cited. | |||
Another example was provided at ]. | |||
* The same editor arguing for the importance of the primary source stated that he would have listed this as one episode, despite a reliable source stating that there are 14 episodes in the season.
* ] has been quoted multiple times: | |||
** {{tq|Misplaced Pages articles usually rely on material from reliable secondary sources. Articles may make an analytic, evaluative, interpretive, or synthetic claim only if it has been published by a reliable secondary source.}} | |||
** {{tq|While a primary source is generally the best source for its own contents, even over a summary of the primary source elsewhere, do not put undue weight on its contents.}} | |||
** {{tq|Do not analyze, evaluate, interpret, or synthesize material found in a primary source yourself; instead, refer to reliable secondary sources that do so.}} | |||
* Other quotes from the editors arguing for the importance of primary over secondary include:
** {{tq|When a secondary source conflicts with a primary source we have an issue to be explained but when the primary source is something like the episodes themselves and what is in them and there is a conflict, we should go with the primary source.}} | |||
** {{tq|We shouldn't be doing "is considered to be"s, we should be documenting what actually happened as shown by sources, the primary authoritative sources overriding conflicting secondary sources.}} | |||
** {{tq|Yep, secondary sources are not perfect and when they conflict with authoritative primary sources such as released films and TV episodes we should go with what is in that primary source.}} | |||
Having summarized this discussion, the question remains: when primary sources and secondary sources conflict, which do we use on Wikipedia?
# Primary, as the episodes are authoritative for factual information, such as runtime and presentation? | |||
# Or secondary, which guide Misplaced Pages's content over primary interpretations? | |||
-- ]<sub> ]</sub> 22:22, 23 January 2025 (UTC) | |||
* As someone who has never watched ''Abbott Elementary'', the example given at ] would be confusing to me. If we are going to say that something with one title, released as a single unit, is actually two episodes we should provide some sort of explanation for that. I would also not consider reliable for the claim that there were 14 episodes in the season. It was published three months before the season began to air; even if the unnamed sources were correct when it was written that the season was planned to have 14 episodes, plans can change. ] (]) 10:13, 24 January 2025 (UTC) | |||
*: is an alternate source, after the premiere's release, that specifically states the finale episode as Episode 14. () And what of your thoughts for the initial argument and contested article, where the sources were also posted after the multiple multi-part episode releases? -- ]<sub> ]</sub> 10:48, 24 January 2025 (UTC) | |||
*::''Vulture'' does say there were 14 episodes in that season, but it also repeatedly describes "Career Day" (episode 1/2 of season 3) in the singular as "the episode" in and never as "the episodes". Similarly and refer to "the supersized premiere episode, 'Career Day'" and "the mega-sized opener titled 'Career Day Part 1 & 2'" respectively, and treat it largely as a single episode in their reviews, though both acknowledge that it is divided into two parts. | |||
*::If reliable sources {{em|do}} all agree that the one-hour episodes are actually two episodes run back-to-back, then we should conform to what the sources say, but that is sufficiently unexpected (and even the sources are clearly not consistent in treating these all as two consecutive episodes) that we do need to at least explain that to our readers. | |||
*::In the case of ''Good Luck Charlie'', while there clearly are sources saying that there were 100 episodes, none of them seem to say which episodes are considered to be two, and I would consider "despite airing under a single title in a single timeslot, this is two episodes" to be a claim which is likely to be challenged and thus require an inline citation per ]. I have searched and I am unable to find a source which supports the claim that e.g. episode 3x07 "Special Delivery" is actually two episodes. ] (]) 12:18, 24 January 2025 (UTC) | ||
:If a series had 94 half-hour episodes and three of one hour, why not just say that? ] (]) 11:04, 24 January 2025 (UTC) | ||
::What would you propose be listed in the first column of the tables at ], and in the infobox at ]? | |||
::Contentious article aside, my question remains as to whether primary or secondary sources are what we base Misplaced Pages upon. -- ]<sub> ]</sub> 11:11, 24 January 2025 (UTC) | ||
== Request for research input to inform policy proposals about banners & logos == | |||
I think the second one looks better and makes articles more consistent, but I still see the first style. Can I get some comments on this one? Thanks ] 02:55, 25 August 2007 (UTC) | |||
I am leading an initiative to review and make recommendations on updates to policies and procedures governing decisions to run project banners or make temporary logo changes. The initiative is focused on ensuring that project decisions to run a banner or temporarily change their logo in response to an “external” event (such as a development in the news or proposed legislation) are made based on criteria and values that are shared by the global Wikimedia community. The first phase of the initiative is research into past examples of relevant community discussions and decisions. If you have examples to contribute, please do so on ]. Thanks! --] (]) 00:04, 24 January 2025 (UTC) | |||
:The second one. It allows for precision and explanations: | |||
:@]: Was this initiative in the works before ar-wiki's action regarding Palestine, or was it prompted by that? ] (]/]) 02:03, 24 January 2025 (UTC) | |||
:*Some statement.<nowiki><ref>For the precise place of the invasion, see , which cites ; for the weather at the time, see , which cites unpublished papers held by .</ref></nowiki> | |||
:Et cetera. Note that after you've scrupulously entered your note, some well-intentioned blunderer may fiddle with the main text so that your footnote appears to source something other than what it really does source; thus a bit of explanation in the footnote can be a good idea (though tedious to type of course). -- ] 03:02, 25 August 2007 (UTC) | |||
::Another advantage of the second is that you can use the ], but even if you don't, you can still add explanatory notes (per Hoary), the authors, publication, date accessed, etc. Also if you use <nowiki><ref name="ref"> ... </ref></nowiki>, then you can cite the same source more than once by using <nowiki><ref name="ref" /></nowiki> for additional occurrences. I do find the first type (inline citation) useful if I'm in a hurry, but I at least want to ref a source. -- ] ]</sup> ]</sub> 03:10, 25 August 2007 (UTC) | ||
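::To illustrate the named-reference reuse described above, a minimal wikitext sketch (the article sentences and citation details are invented placeholders, not from any real article):

```wikitext
<!-- Define the reference once, giving it a name... -->
Paris is the capital of France.<ref name="atlas">''World Atlas''. Example Press, 2007, p. 12.</ref>
<!-- ...then reuse it by name; the self-closing form repeats the same footnote number. -->
It is also the country's largest city.<ref name="atlas" />

== References ==
<references />
```

Both citations render as the same footnote, listed once under the <nowiki><references /></nowiki> tag.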
:::I don't particularly recommend the templates (more typing/pasting involved) but their layout examples are very useful as a guide to best practice. The biggest issue with embedded URLs is that (obviously) they prevent inline citations for offline sources. Another advantage of footnoting is the {{tl|note}} system that allows you to split up notes and references. '''''] ]''''' 11:09, 25 August 2007 (UTC) | ||
== RfC: Amending ATD-R == | |||
:::: Well then, everyone agrees. So shouldn't the citation pages be updated to make that more clear?... This style is considered deprecated. When I see this style I almost always replace it with this one<ref>http://www.google.com</ref>, but I wasn't sure that was the ''absolutely correct'' thing to do. Now that we all agree, should the ] pages be more clear in steering editors in the correct direction for citation styles? Thanks again. ] 14:10, 25 August 2007 (UTC) | ||
:::::''I'' think so, and have always thought so, but that is just my opinion. I never liked using embedded URLs and I have since returned to old articles and changed any that I can find. But therein lies part of the reason for their continuation; they are very easy for new editors to grasp. Not that footnotes are ''hard'' at all, but a simple URL link means one less thing to learn when someone (like me) comes along and says "you know, you should really cite all that". I would not mind seeing the embedded URL system given less weight, at the least, though I doubt that we would readily reach consensus in favour of that. We won't see it given the elbow altogether any time soon. '''''] ]''''' 16:19, 25 August 2007 (UTC) | |||
<!-- ] 02:01, 28 February 2025 (UTC) -->{{User:ClueBot III/DoNotArchiveUntil|1740708070}} | |||
:::::: I agree that it's not hard at all, yet I also agree that URLs are easier for first-time editors to understand. I'm simply suggesting that comments are made on the ] pages that say something like, ''you could use this URL refs, but the preferred method is the ref style...''. Isn't that worth it? ] 16:36, 25 August 2007 (UTC) | |||
{{rfc|policy|rfcid=E403F0F}} | |||
:::::Yes, it is. You already had my full agreement about that. Propose it at ] and see what response it gets. '''''] ]''''' 17:20, 25 August 2007 (UTC) | |||
Should ] be amended as follows: | |||
== placement of citation == | |||
{{td|A page can be ] if there is a suitable page to redirect to, and if the resulting redirect is not ]. If the change is disputed via a ], an attempt should be made to reach a ] before blank-and-redirecting again. Suitable venues for doing so include the article's talk page and ].|A page can be ] if there is a suitable page to redirect to, and if the resulting redirect is not ]. If the change is disputed, such as by ], an attempt should be made to reach a ] before blank-and-redirecting again. The proper venue for doing so is ], although sometimes the dispute may be resolved on the article's talk page.}} | |||
What is the policy about citation location? Should one put it on a phrase, on the sentence, at the end of a paragraph? Suppose two sentences in a paragraph have the same source? Can the citation be at the end of the second sentence?--] 16:39, 25 August 2007 (UTC) | |||
:See ] and its talk page. If a citation refers specifically to ''one part'' of a sentence and is not relevant to the rest of that sentence, then place it in the middle as needed; otherwise, place it at the end. Two or more facts in one sentence or paragraph that come from the exact same reference source can be cited with just one reference. For the sake of readability, it should go after punctuation. Some folk cite reasons of style and convention for placing them before punctuation, but easy readability is more important. '''''] ]''''' 17:18, 25 August 2007 (UTC) | |||
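:As a sketch of the placement advice above (the claims and the reference names "jones" and "smith" are hypothetical):

```wikitext
<!-- "jones" supports only the date, so it sits mid-sentence, directly after
     the clause it verifies; "smith" covers the rest of the sentence and goes
     at the end. Both follow the punctuation, per the readability advice above. -->
The fort fell on 3 May 1807,<ref name="jones" /> and fewer than forty
defenders survived the assault.<ref name="smith" />
```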
Prior discussion: ] | |||
== Sources and no original research == | |||
=== Support === | |||
I have proposed a replacement for the ] section of ] at ]. The proposal focuses on what sources should be relied upon and how to handle other references, in relation to original research. Cheers! ] 16:45, 25 August 2007 (UTC) | |||
* As proposer. This reflects ] and current practice. Blanking of article content should be discussed at AfD, not another venue. If someone contests a BLAR, they're contesting the fact that article content was removed, not that a redirect exists. The venue matters because different sets of editors patrol AfD and RfD. ] (]/]) 01:54, 24 January 2025 (UTC) | |||
* Summoned by bot. I ''broadly'' support this clarification. However, I think it could be made even clearer that, in lieu of an AfD, if a consensus on the talkpage emerges that it should be merged to another article, that suffices and reverting a BLAR doesn't change that consensus without good reason. As written, I worry that the interpretation will be "if it's contested, it ''must'' go to AfD". I'd recommend the following: {{tq|This may be done through either a merge discussion on the talkpage that results in a clear consensus to merge. Alternatively, or if a clear consensus on the talkpage does not form, the article should be submitted through Articles for Deletion for a broader consensus to emerge.}} That said, I'm not so miffed with the proposed wording to oppose it. -bɜ:ʳkənhɪmez | ] | ] 02:35, 24 January 2025 (UTC) | |||
*:I don't see this proposal as precluding a merge discussion. ] (]/]) 02:46, 24 January 2025 (UTC) | |||
*::I don't either, but I see the wording of {{tq|although sometimes the dispute may be resolved on the article's talk page}} closer to "if the person who contested/reverted agrees on the talk page, you don't need an AfD" rather than "if a consensus on the talk page is that the revert was wrong, an AfD is not needed". The second is what I see general consensus as, not the first. -bɜ:ʳkənhɪmez | ] | ] 02:53, 24 January 2025 (UTC) | |||
* I broadly support the idea; an AFD is going to get more eyes than an obscure talkpage, so I suspect it is the better venue in ''most'' cases. I'm unsure how to work this nuance into the prose, but suspect that in the rare cases where another forum would be better, such a forum might emerge anyway. ] (]) 03:28, 24 January 2025 (UTC) | ||
* '''Support''' per my extensive comments in the prior discussion. ] (]) 11:15, 24 January 2025 (UTC) | |||
=== Oppose === | |||
== Dealing with templates in user signatures == | |||
=== Discussion === | |||
] states clearly that users should not transclude templates in signatures, but makes no mention of what to do when you find an unsuitable signature that transcludes a sub-page. Is ] the best route to take? ] and the MFD page do not make any specific mention of this unless I have missed something, and ] looks unsuitable for this. '''''] ]''''' 17:13, 25 August 2007 (UTC) | |||
*not entirely sure i should vote, but i should probably mention ] that preceded the one about atd-r, and i do think this rfc should affect that as well, but wouldn't be surprised if it required another one '''] <sub>] ]</sub>''' 12:38, 24 January 2025 (UTC) |
RfC: Voluntary RfA after resignation
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- There is clear consensus that participants in this discussion wish to retain the "Option 2" status quo. We're past 30 days of discussion and there's not much traffic on the discussion now. It's unlikely the consensus would suddenly shift with additional discussion. --Hammersoft (talk) 18:29, 16 January 2025 (UTC)
Should Misplaced Pages:Administrators#Restoration of admin tools be amended to:
- Option 1 – Require former administrators to request restoration of their tools at the bureaucrats' noticeboard (BN) if they are eligible to do so (i.e., they do not fit into any of the exceptions).
- Option 2 – Maintain the status quo that former administrators who would be eligible to request restoration via BN may instead request restoration of their tools via a voluntary request for adminship (RfA).
Background: This issue arose in one recent RfA and is currently being discussed in an ongoing RfA. voorts (talk/contributions) 21:14, 15 December 2024 (UTC)
Note: There is an ongoing related discussion at Misplaced Pages:Village pump (idea lab) § Making voluntary "reconfirmation" RFA's less controversial.
Note: Option 2 was modified around 22:08, 15 December 2024 (UTC).
Note: Added option 3. theleekycauldron (talk • she/her) 22:12, 15 December 2024 (UTC)
- Notified: Misplaced Pages:Administrators' noticeboard, Misplaced Pages:Bureaucrats' noticeboard, Misplaced Pages talk:Administrators, Misplaced Pages talk:Requests for adminship, T:CENT. voorts (talk/contributions) 21:19, 15 December 2024 (UTC)
- 2 per Kline's comment at Hog Farm's RfA. If an admin wishes to be held accountable for their actions at a re-RfA, they should be allowed to do so. charlotte 21:22, 15 December 2024 (UTC)
- Also fine with 3 charlotte 22:23, 15 December 2024 (UTC)
- There is ongoing discussion about this at Misplaced Pages:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial. CMD (talk) 21:24, 15 December 2024 (UTC)
- 2, after thought. I don't think 3 provides much benefit, and creating a separate class of RfAs that are speedy passed feels like a misstep. If there are serious issues surrounding wasting time on RfAs set up under what might feel to someone like misleading pretenses, that is best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)". CMD (talk) 14:49, 16 December 2024 (UTC)
best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)"
- I like this idea, if option 2 comes out as consensus I think this small change would be a step in the right direction, as the "this isn't the best use of time" crowd (myself included) would be able to quickly identify the type of RFAs they don't want to participate in. BugGhost 🦗👻 11:05, 17 December 2024 (UTC)
- I think that's a great idea. I would support adding some text encouraging people who are considering seeking reconfirmation to add (RRfA) or (reconfirmation) after their username in the RfA page title. That way people who are averse to reading or participating in reconfirmations can easily avoid them, and no one is confused about what is going on. 28bytes (talk) 14:23, 17 December 2024 (UTC)
- I think this would be a great idea if it differentiated against recall RfAs. Aaron Liu (talk) 18:37, 17 December 2024 (UTC)
- If we are differentiating three types of RFA we need three terms. Post-recall RFAs are referred to as "reconfirmation RFAs", "Re-RFAS" or "RRFAs" in multiple places, so ones of the type being discussed here are the ones that should take the new term. "Voluntary reconfirmation RFA" (VRRFA or just VRFA) is the only thing that comes to mind but others will probably have better ideas. Thryduulf (talk) 21:00, 17 December 2024 (UTC)
- 1 * Pppery * it has begun... 21:25, 15 December 2024 (UTC)
- 2 I don't see why people trying to do the right thing should be discouraged from doing so. If others feel it is a waste of time, they are free to simply not participate. El Beeblerino 21:27, 15 December 2024 (UTC)
- 2 Getting reconfirmation from the community should be allowed. Those who see it as a waste of time can ignore those RfAs. Schazjmd (talk) 21:32, 15 December 2024 (UTC)
- Of course they may request at RfA. They shouldn't but they may. This RfA feels like it does nothing to address the criticism actually in play and per the link to the idea lab discussion it's premature to boot. Barkeep49 (talk) 21:38, 15 December 2024 (UTC)
- 2 per my comments at the idea lab discussion and Queen of Hearts, Beeblebrox and Schazjmd above. I strongly disagree with Barkeep's comment that "They shouldn't". It shouldn't be made mandatory, but it should be encouraged where the time since desysop and/or the last RFA has been lengthy. Thryduulf (talk) 21:42, 15 December 2024 (UTC)
- When to encourage it would be a worthwhile RfC and such a discussion could be had at the idea lab before launching an RfC. Best, Barkeep49 (talk) 21:44, 15 December 2024 (UTC)
- I've started that discussion as a subsection to the linked VPI discussion. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 1 or 3. RFA is an "expensive" process in terms of community time. RFAs that qualify should be fast-tracked via the BN process. It is only recently that a trend has emerged that folks that don't need to RFA are RFAing again. 2 in the last 6 months. If this continues to scale up, it is going to take up a lot of community time, and create noise in the various RFA statistics and RFA notification systems (for example, watchlist notices and User:Enterprisey/rfa-count-toolbar.js). –Novem Linguae (talk) 21:44, 15 December 2024 (UTC)
- Making statistics "noisy" is just a reason to improve the way the statistics are gathered. In this case collecting statistics for reconfirmation RFAs separately from other RFAs would seem to be both very simple and very effective. If (and it is a very big if) the number of reconfirmation RFAs means that notifications are getting overloaded, then we can discuss whether reconfirmation RFAs should be notified differently. As far as differentiating them, that is also trivially simple - just add a parameter to template:RFA (perhaps "reconfirmation=y") that outputs something that bots and scripts can check for. Thryduulf (talk) 22:11, 15 December 2024 (UTC)
- Option 3 looks like a good compromise. I'd support that too. –Novem Linguae (talk) 22:15, 15 December 2024 (UTC)
- I'm weakly opposed to option 3, editors who want feedback and a renewed mandate from the community should be entitled to it. If they felt that that a quick endorsement was all that was required then could have had that at BN, they explicitly chose not to go that route. Nobody is required to participate in an RFA, so if it is going the way you think it should, or you don't have an opinion, then just don't participate and your time has not been wasted. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 2. We should not make it more difficult for administrators to be held accountable for their actions in the way they please. JJPMaster (she/they) 22:00, 15 December 2024 (UTC)
- Added option 3 above. Maybe worth considering as a happy medium, where unsure admins can get a check on their conduct without taking up too much time. theleekycauldron (talk • she/her) 22:11, 15 December 2024 (UTC)
- 2 – If a former admin wishes to subject themselves to RfA to be sure they have the requisite community confidence to regain the tools, why should we stop them? Any editor who feels the process is a waste of time is free to ignore any such RfAs. — Jkudlick ⚓ (talk) 22:12, 15 December 2024 (UTC)
- I would also support option 3 if the time is extended to 72 hours instead of 48. That, however, is a detail that can be worked out after this RfC. — Jkudlick ⚓ (talk) 02:05, 16 December 2024 (UTC)
- Option 3 per leek. voorts (talk/contributions) 22:16, 15 December 2024 (UTC)
- A further note: option 3 gives 'crats the discretion to SNOW close a successful voluntary re-RfA; it doesn't require such a SNOW close, and I trust the 'crats to keep an RfA open if an admin has a good reason for doing so. voorts (talk/contributions) 23:24, 16 December 2024 (UTC)
- 2 as per JJPMaster. Regards, --Goldsztajn (talk) 22:20, 15 December 2024 (UTC)
- Option 2 (no change) – The sample size is far too small for us to analyze the impact of such a change, but I believe RfA should always be available. Now that WP:RECALL is policy, returning administrators may worry that they have become out of touch with community norms and may face a recall as soon as they get their tools back at BN. Having this familiar community touchpoint as an option makes a ton of sense, and would be far less disruptive / demoralizing than a potential recall. Taking this route away, even if it remains rarely used, would be detrimental to our desire for increased administrator accountability. – bradv 22:22, 15 December 2024 (UTC)
- (edit conflict) I'm surprised the response here hasn't been more hostile, given that these give the newly-unresigned administrator a get-out-of-recall-free card for a year. —Cryptic 22:25, 15 December 2024 (UTC)
- @Cryptic hostile to what? Thryduulf (talk) 22:26, 15 December 2024 (UTC)
- 2, distant second preference 3. I would probably support 3 as first pick if not for recall's rule regarding last RfA, but as it stands, SNOW-closing a discussion that makes someone immune to recall for a year is a non-starter. Between 1 and 2, though, the only argument for 1 seems to be that it avoids a waste of time, for which there is the much simpler solution of not participating and instead doing something else. Special:Random and Misplaced Pages:Backlog are always there. -- Tamzin (they|xe|🤷) 23:31, 15 December 2024 (UTC)
- 1 would be my preference, but I don't think we need a specific rule for this. -- Ajraddatz (talk) 23:36, 15 December 2024 (UTC)
- Option 1. No second preference between 2 or 3. As long as a former administrator didn't resign under a cloud, picking up the tools again should be low friction and low effort for the entire community. If there are issues introduced by the recall process, they should be fixed in the recall policy itself. Daniel Quinlan (talk) 01:19, 16 December 2024 (UTC)
  - After considering this further, I prefer option 3 over option 2 if option 1 is not the consensus. Daniel Quinlan (talk) 07:36, 16 December 2024 (UTC)
- Option 2, i.e. leave well enough alone. There is really not a problem here that needs fixing. If someone doesn’t want to “waste their time” participating in an RfA that’s not required by policy, they can always, well, not participate in the RfA. No one is required to participate in someone else’s RfA, and I struggle to see the point of participating but then complaining about “having to” participate. 28bytes (talk) 01:24, 16 December 2024 (UTC)
- Option 2 nobody is obligated to participate in a re-confirmation RfA. If you think they are a waste of time, avoid them. LEPRICAVARK (talk) 01:49, 16 December 2024 (UTC)
- 1 or 3 per Novem Linguae. C F A 02:35, 16 December 2024 (UTC)
- Option 3: Because it is incredibly silly to have situations like we do now of "this guy did something wrong by doing an RfA that policy explicitly allows, oh well, nothing to do but sit on our hands and dissect the process across three venues and counting." Your time is your own. No one is forcibly stealing it from you. At the same time it is equally silly to let the process drag on, for reasons explained in WP:SNOW. Gnomingstuff (talk) 03:42, 16 December 2024 (UTC)
- Update: Option 2 seems to be the consensus and I also would be fine with that. Gnomingstuff (talk) 18:10, 19 December 2024 (UTC)
- Option 3 per Gnoming. I think 2 works, but it is a very long process and for someone to renew their tools, it feels like an unnecessarily long process compared to a normal RfA. Conyo14 (talk) 04:25, 16 December 2024 (UTC)
- As someone who supported both WormTT and Hog Farm's RfAs, option 1 > option 3 >> option 2. At each individual RfA the question is whether or not a specific editor should be an admin, and in both cases I felt that the answer was clearly "yes". However, I agree that RfA is a very intensive process. It requires a lot of time from the community, as others have argued better than I can. I prefer option 1 to option 3 because the existence of the procedure in option 3 implies that it is a good thing to go through 48 hours of RfA to re-request the mop. But anything which saves community time is a good thing. HouseBlaster (talk • he/they) 04:31, 16 December 2024 (UTC)
- I've seen this assertion made multiple times now that it "requires a lot of time from the community", yet nowhere has anybody articulated why this is true. What time is required, given that nobody is required to participate and everybody who does choose to participate can spend as much or as little time assessing the candidate as they wish? How and why does a reconfirmation RFA require any more time from editors (individually or collectively) than a request at BN? Thryduulf (talk) 04:58, 16 December 2024 (UTC)
  - I think there are a number of factors and people are summing it up as "time-wasting" or similar:
- BN is designed for this exact scenario. It's also clearly a less contentious process.
- Snow closures are a good example of how we try to avoid wasting community time on unnecessary process, and the same reasoning applies here. Misplaced Pages is not a bureaucracy and there's no reason to have a 7-day process when the outcome is a given.
- If former administrators continue to choose re-RFAs over BN, it could set a problematic precedent where future re-adminship candidates feel pressured to go through an RFA and all that entails. I don't want to discourage people already vetted by the community from rejoining the ranks.
- The RFA process is designed to be a thoughtful review of prospective administrators and I'm concerned these kinds of perfunctory RFAs will lead to people taking the process less seriously in the future.
- Daniel Quinlan (talk) 07:31, 16 December 2024 (UTC)
- Because several thousand people have RFA on their watchlist, and thousands more will see the "there's an open RFA" notice on theirs whether they follow it or not. Unlike BN, RFA is a process that depends on community input from a large number of people. In order to even realise that the RFA is not worth their time, they have to:
- Read the opening statement and first few question answers (I just counted, HF's opening and first 5 answers are about 1000 words)
- Think, "oh, they're an ex-admin, I wonder why they're going through RFA, what was their cloud"
- Read through the comments and votes to see if any issues have been brought up (another ~1000 words)
- None have
- Realise your input is not necessary and this could have been done at BN
- This process will be repeated by hundreds of editors over the course of a week. BugGhost 🦗👻 08:07, 16 December 2024 (UTC)
- That they were former admins has always been the first two sentences of their RfA’s statement, sentences which are immediately followed by that they resigned due to personal time commitment issues. You do not have to read the first 1000+ words to figure that out. If the reader wants to see if the candidate was lying in their statement, then they just have a quick skim through the oppose section. None of this should take more than 30 seconds in total. Aaron Liu (talk) 13:15, 16 December 2024 (UTC)
- Not everyone can skim things easily - it personally takes me a while to read sections. I don't know if they're going to bury the lede and say something like "Also I made 10,000 insane redirects and then decided to take a break just before arbcom launched a case" in paragraph 6. Hog Farm's self nom had two paragraphs about disputes and it takes more than 30 seconds to unpick that and determine if that is a "cloud" or not. Even for reconfirmations, it definitely takes more than 30 seconds to determine a conclusion. BugGhost 🦗👻 11:21, 17 December 2024 (UTC)
- They said they resigned to personal time commitments. That is directly saying they weren't under a cloud, so I'll believe them unless someone claims the contrary in the oppose section. If the disputes section contained a cloud, the oppose section would have said so. One chooses to examine such nominations like normal RfAs. Aaron Liu (talk) 18:47, 17 December 2024 (UTC)
- Just to double check, you're saying that whenever you go onto an RFA you expect any reason to oppose to already be listed by someone else, and no thought is required? I am beginning to see how you are able to assess an RFA in under 30 seconds BugGhost 🦗👻 23:08, 17 December 2024 (UTC)
- Something in their statement would be an incredibly obvious reason. We are talking about the assessment whether to examine and whether the candidate could've used BN. Aaron Liu (talk) 12:52, 18 December 2024 (UTC)
- @Thryduulf let's not confuse "a lot of community time is spent" with "waste of time". Some people have characterized the re-RFAs as a waste of time but that's not the assertion I (and I think a majority of the skeptics) have been making. All RfAs use a lot of community time as hundreds of voters evaluate the candidate. They then choose to support, oppose, be neutral, or not vote at all. While editor time is not perfectly fixed - editors may choose to spend less time on non-Misplaced Pages activities at certain times - neither is it a resource we have in abundance anymore relative to our project. And so I think we, as a community, need to be thoughtful about how we're using that time, especially when that time could otherwise have been spent on other wiki activities. Best, Barkeep49 (talk) 22:49, 16 December 2024 (UTC)
- Absolutely nothing compels anybody to spend any time evaluating an RFA. If you think your wiki time is better spent elsewhere than evaluating an RFA candidate, then spend it elsewhere. That way only those who do think it is a good use of their time will participate and everybody wins. You win by not spending your time on something that you don't think is worth it, those who do participate don't have their time wasted by having to read comments (that contradict explicit policy) about how the RFA is a waste of time. Personally I regard evaluating whether a long-time admin still has the approval of the community to be a very good use of community time, you are free to disagree, but please don't waste my time by forcing me to read comments about how you think I'm wasting my time. Thryduulf (talk) 23:39, 16 December 2024 (UTC)
- I am not saying you or anyone else is wasting time and am surprised you are so fervently insisting I am. Best, Barkeep49 (talk) 03:34, 17 December 2024 (UTC)
- I don't understand how your argument that it is not a good use of community time is any different from arguing that it is a waste of time? Thryduulf (talk) 09:08, 17 December 2024 (UTC)
- I think there are a number of factors and people are summing it up as "time-wasting" or similar:
- I've seen this assertion made multiple times now that
- Option 2 I don't mind the re-RFAs, but I'd appreciate if we encouraged restoration via BN instead, I just object to making it mandatory. EggRoll97 06:23, 16 December 2024 (UTC)
- Option 2. Banning voluntary re-RfAs would be a step in the wrong direction on admin accountability. Same with SNOW closing. There is no more "wasting of community time" if we let the RfA run for the full seven days, but allowing someone to dig up a scandal on the seventh day is an important part of the RfA process. The only valid criticism I've heard is that folks who do this are arrogant, but banning arrogance, while noble, seems highly impractical. Toadspike 07:24, 16 December 2024 (UTC)
- Option 3, 1, then 2, per HouseBlaster. Also agree with Daniel Quinlan. I think these sorts of RFA's should only be done in exceptional circumstances. Graham87 (talk) 08:46, 16 December 2024 (UTC)
- Option 1 as first preference, option 3 second. RFAs use up a lot of time - hundreds of editors will read the RFA and it takes time to come to a conclusion. When that conclusion is "well that was pointless, my input wasn't needed", it is not a good system. I think transparency and accountability are very good things, and we need more of them for resysoppings, but that should come from improving the normal process (BN) rather than using a different one (RFA). My ideas for improving the BN route to make it more transparent and better at getting community input are outlined over on the idea lab BugGhost 🦗👻 08:59, 16 December 2024 (UTC)
- Option 2, though I'd be for option 3 too. I'm all for administrators who feel like they want/should go through an RfA to solicit feedback even if they've been given the tools back already. I see multiple people talk about going through BN, but if I had to hazard a guess, it's way less watched than RfA is. However I do feel like watchlist notifications should say something to the effect of "A request for re-adminship feedback is open for discussion" so that people that don't like these could ignore them. ♠JCW555 (talk)♠ 09:13, 16 December 2024 (UTC)
- Option 2 because WP:ADMINISTRATORS is well-established policy. Read WP:ADMINISTRATORS#Restoration of admin tools, which says quite clearly,
Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.
I went back 500 edits to 2017 and the wording was substantially the same back then. So, I simply do not understand why various editors are berating former administrators to the point of accusing them of wasting time and being arrogant for choosing to go through a process which is specifically permitted by policy. It is bewildering to me. Cullen328 (talk) 09:56, 16 December 2024 (UTC)
- Option 2 & 3 I think that there still should be the choice between BN and re-RFA for resysops, but I think that the re-RFA should stay like it is in Option 3, unless it is controversial, at which point it could be extended to the full RFA period. I feel like this would be the best compromise between not "wasting" community time (which I believe is a very overstated, yet understandable, point) and ensuring that the process is based on broad consensus and that our "representatives" are still supported. If I were WTT or Hog, I might choose to make the same decision so as to be respectful of the possibility of changing consensus. JuxtaposedJacob (talk) | :) | he/him | 10:45, 16 December 2024 (UTC)
- Option 2, for lack of a better choice. Banning re-RFAs is not a great idea, and we should not SNOW close a discussion that would give someone immunity from a certain degree of accountability. I've dropped an idea for an option 4 in the discussion section below. Giraffer (talk) 12:08, 16 December 2024 (UTC)
- Option 1 I agree with Graham87 that these sorts of RFAs should only be done in exceptional circumstances, and BN is the best place to ask for tools back. – DreamRimmer (talk) 12:11, 16 December 2024 (UTC)
- Option 2 I don't think prohibition makes sense. It also has weird side effects. eg: some admins' voluntary recall policies may now be completely void, because they would be unable to follow them even if they wanted to, because policy prohibits them from doing a RFA. (maybe if they're also 'under a cloud' it'd fit into exemptions, but if an admin's policy is "3 editors on this named list tell me I'm unfit, I resign" then this isn't really a cloud.) Personally, I think Hog Farm's RFA was unwise, as he's textbook uncontroversial. Worm's was a decent RFA; he's also textbook uncontroversial but it happened at a good time. But any editor participating in these discussions to give the "support" does so using their own time. Everyone who feels their time is wasted can choose to ignore the discussion, and instead it'll pass as 10-0-0 instead of 198-2-4. It just doesn't make sense to prohibit someone from seeking a community discussion, though. For almost anything, really. ProcrastinatingReader (talk) 12:33, 16 December 2024 (UTC)
- Option 2 It takes like two seconds to support or ignore an RFA you think is "useless"... can't understand the hullabaloo around them. I stand by what I said on WTT's re-RFA regarding RFAs being about evaluating trustworthiness and accountability. Trustworthy people don't skip the process. —k6ka 🍁 (Talk · Contributions) 15:24, 16 December 2024 (UTC)
- Option 1 - Option 2 is a waste of community time. - Ratnahastin (talk) 15:30, 16 December 2024 (UTC)
- 2 is fine. Strong oppose to 1 and 3. Opposing option 1 because there is nothing wrong with asking for extra community feedback. opposing option 3 because once an RfA has been started, it should follow the standard rules. Note that RfAs are extremely rare and non-contentious RfAs require very little community time (unlike this RfC which seems a waste of community time, but there we are). —Kusma (talk) 16:59, 16 December 2024 (UTC)
- 2, with no opposition to 3. I see nothing wrong with a former administrator getting re-confirmed by the community, and community vetting seems like a good thing overall. If people think it's a waste of time, then just ignore the RfA. Natg 19 (talk) 17:56, 16 December 2024 (UTC)
- 2 Sure, and clarify that should such an RFA be unsuccessful, they may only regain the tools through a future RfA. — xaosflux 18:03, 16 December 2024 (UTC)
- Option 2 If contributing to such an RFA is a waste of your time, just don't participate. TheWikiToby (talk) 18:43, 16 December 2024 (UTC)
- No individual is wasting their time participating. Instead the person asking for a re-rfa is using tons of editor time by asking hundreds of people to vet them. Even the choice not to participate requires at least some time to figure out that this is not a new RfA; though at least in the two we've had recently it would require only as long as it takes to get to the RfA - for many a click from the watchlist and then another click into the rfa page - and to read the first couple of sentences of the self-nomination which isn't terribly long all things considered. Best, Barkeep49 (talk) 22:55, 16 December 2024 (UTC)
- I agree with you (I think) that it's a matter of perspective. For me, clicking the RFA link in my watchlist and reading the first paragraph of Hog Farm's nomination (where they explained that they were already a respected admin) took me about 10 seconds. Ten seconds is nothing; in my opinion, this is just a nonissue. But then again, I'm not an admin, checkuser, or an oversighter. Maybe the time to read such a nomination is really wasting their time. I don't know. TheWikiToby (talk) 23:15, 16 December 2024 (UTC)
- I'm an admin and an oversighter (but not a checkuser). None of my time was wasted by either WTT or Hog Farm's nominations. Thryduulf (talk) 23:30, 16 December 2024 (UTC)
- 2. Maintain the status quo. And stop worrying about a trivial non-problem. --Tryptofish (talk) 22:57, 16 December 2024 (UTC)
- 2. This reminds me of banning plastic straws (bear with me). Sure, I suppose in theory, that this is a burden on the community's time (just as straws do end up in landfills/the ocean). However, the amount of community time that is drained is minuscule compared to the amount of community time drained in countless, countless other fora and processes (just like the volume of plastic waste contributed by plastic straws is less than 0.001% of the total plastic waste). When WP becomes an efficient, well oiled machine, then maybe we can talk about saving community time by banning re-RFA's. But this is much ado about nothing, and indeed this plan to save people from themselves, and not allow them to simply decide whether to participate or not, is arguably more damaging than some re-RFAs (just as banning straws convinced some people that "these save-the-planet people are so ridiculous that I'm not going to bother listening to them about anything."). And, in fact, on a separate note, I'd actually love it if more admins just ran a re-RFA whenever they wanted. They would certainly get better feedback than just posting "What do my talk page watchers think?" on their own talk page. Or waiting until they get yelled at on their talk page, AN/ANI, AARV, etc. We say we want admins to respect feedback; does it have to be in a recall petition? --Floquenbeam (talk) 23:44, 16 December 2024 (UTC)
- What meaningful feedback has Hog Farm gotten? "A minority of people think you chose poorly in choosing this process to regain adminship". What are they supposed to do with that? I share your desire for editors to share meaningful feedback with administrators. My own attempt yielded some, though mainly offwiki where I was told I was both too cautious and too impetuous (and despite the seeming contradiction each was valuable in its own way). So yes let's find ways to get meaningful feedback to admins outside of recall or being dragged to ANI. Unfortunately re-RfA seems to be poorly suited to the task and so we can likely find a better way. Best, Barkeep49 (talk) 03:38, 17 December 2024 (UTC)
- Let us all take some comfort in the fact that no one has yet criticized this RfC comment as being a straw man argument. --Tryptofish (talk) 23:58, 18 December 2024 (UTC)
- No hard rule, but we should socially discourage confirmation RfAs There is a difference between a hard rule, and a soft social rule. A hard rule against confirmation RfAs, like option 1, would not do a good job of accounting for edge cases and would thus be ultimately detrimental here. But a soft social rule against them would be beneficial. Unfortunately, that is not one of the options of this RfC. In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers. (Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here.) That takes some introspection and humility to ask yourself: is it worth me inviting two or three hundred people to spend part of their lives to comment on me as a person? A lot of people have thrown around editor time in their reasonings. Obviously, broad generalizations about it aren't convincing anyone. So let me just share my own experience. I saw the watchlist notice open that a new RfA was being run. I reacted with some excitement, because I always like seeing new admins. When I got to the page and saw Hogfarm's name, I immediately thought "isn't he already an admin?" I then assumed, ah, it's just the classic RfA reaction at seeing a qualified candidate, so I'll probably support him since I already think he's an admin. But then as I started to do my due diligence and read, I saw that he really, truly, already had been an admin. At that point, my previous excitement turned to a certain unease. I had voted yes for Worm's confirmation RfA, but here was another...and I realized that my blind support for Worm might have been the start of an entirely new process. I then thought "bet there's an RfC going about this," and came here. 
I then spent a while polishing up my essay on editor time, before taking time to write this message. All in all, I probably spent a good hour doing this. Previously, I'd just been clicking the random article button and gnoming. So, the longwinded moral: yeah, this did eat up a lot of my editor time that could have been and was being spent doing something else. And I'd do it again! It was important to do my research and to comment here. But in the future...maybe I won't react quite as excitedly to seeing that RfA notice. Maybe I'll feel a little pang of dread...wondering if it's going to be a confirmation RfA. We can't pretend that confirmation RfAs are costless, and that we don't lose anything even if editors just ignore them. When run, it should be because they are necessary. CaptainEek ⚓ 03:29, 17 December 2024 (UTC)
- And for what it's worth, support Option 3 because I'm generally a fan of putting more tools in people's toolboxes. CaptainEek ⚓ 03:36, 17 December 2024 (UTC)
In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers.
Asking the community whether you still have their trust to be an administrator, which is what a reconfirmation RFA is, is a good reason. I expect getting a near-unanimous "yes" is good for one's ego, but that's just a (nice) side-effect of the far more important benefit to the entire community: a trusted administrator.
- The time you claim is being eaten up unnecessarily by reconfirmation RFAs was actually taken up by you choosing to spend your time writing an essay about using time for things you don't approve of and then hunting out an RFC in which you wrote another short essay about using time on things you don't approve of. Absolutely none of that is a necessary consequence of reconfirmation RFAs - indeed the response consistent with your stated goals would have been to read the first two sentences of Hog Farm's RFA and then close the tab and return to whatever else it was you were doing. Thryduulf (talk) 09:16, 17 December 2024 (UTC)
- WTT's and Hog Farm's RFAs would have been completely uncontentious, something I hope for at RfA and certainly the opposite of what I "dread" at RfA, if it were not for the people who attack the very concept of standing for RfA again despite policy being crystal clear that it is absolutely fine. I don't see how any blame for this situation can be put on WTT or HF. We can't pretend that dismissing uncontentious reconfirmation RfAs is costless; discouraging them removes one of the few remaining potentially wholesome bits about the process. —Kusma (talk) 09:53, 17 December 2024 (UTC)
- @CaptainEek Would you find it better if Watchlist notices and similar said "(re?)confirmation RFA" instead of "RFA"? Say for all voluntary RFAs from an existing admin or someone who could have used BN?
- As a different point, I would be quite against any social discouraging if we're not making a hard rule as such. Social discouraging is what got us the opposes at WTT/Hog Farm's RFAs, which I found quite distasteful and badgering. If people disagree with a process, they should change it. But if the process remains the same, I think it's important to not enable RFA's toxicity by encouraging others to namecall or re-argue the process in each RRFA. It's a short road from social discouragement to toxicity, unfortunately. Soni (talk) 18:41, 19 December 2024 (UTC)
- Yes I think the watchlist notice should specify what kind of RfA, especially with the introduction of recall. CaptainEek ⚓ 16:49, 23 December 2024 (UTC)
- Option 1. Will prevent the unnecessary drama trend we have been seeing recently. – Ammarpad (talk) 07:18, 17 December 2024 (UTC)
- Option 2 if people think there's a waste of community time, don't spend your time voting or discussing. Or add "reconfirmation" or similar to the watchlist notice. ~~ AirshipJungleman29 (talk) 15:08, 17 December 2024 (UTC)
- Option 3 (which I think is a subset of option 2, so I'm okay with the status quo, but I want to endorse giving 'crats the option to SNOW). While they do come under scrutiny from time to time for the extensive discussions in the "maybe" zone following RfAs, this should be taken as an indication that they are unlikely to do something like close it as SNOW in the event there are real and substantial concerns being raised. This is an okay tool to give the 'crats. As far as I can tell, no one has ever accused them of moving too quickly in this direction (not criticism; love you all, keep up the good work). Bobby Cohn (talk) 17:26, 17 December 2024 (UTC)
- Option 3 or Option 2. Further, if Option 2 passes, I expect it also ends all the bickering about lost community time. A consensus explicitly in favour of "This is allowed" should also be a consensus to discourage relitigation of this RFC. Soni (talk) 17:35, 17 December 2024 (UTC)
- Option 2: Admins who do not exude entitlement are to be praised. Those who criticize this humility should have a look in the mirror before accusing those who ask for reanointment from the community of "arrogance". I agree that it wouldn't be a bad idea to mention in parentheses that the RFA is a reconfirmation (watchlist) and wouldn't see any problem with crats snow-closing after, say, 96 hours. -- SashiRolls 18:48, 17 December 2024 (UTC)
- I disagree that BN shouldn't be the normal route. RfA is already as hard and soul-crushing as it is. Aaron Liu (talk) 20:45, 17 December 2024 (UTC)
- Who are you disagreeing with? This RfC is about voluntary RRfA. -- SashiRolls 20:59, 17 December 2024 (UTC)
- I know. I see a sizable amount of commenters here starting to say that voluntary re-RfAs should be encouraged, and your first sentence can be easily read as implying that admins who use the BN route exude entitlement. I disagree with that (see my reply to Thryduulf below). Aaron Liu (talk) 12:56, 18 December 2024 (UTC)
- One way to improve the reputation of RFA is for there to be more RFAs that are not terrible, such as reconfirmations of admins who are doing/have done a good job who sail through with many positive comments. There is no proposal to make RFA mandatory in circumstances it currently isn't, only to reaffirm that those who voluntarily choose RFA are entitled to do so. Thryduulf (talk) 21:06, 17 December 2024 (UTC)
- I know it's not a proposal, but there's enough people talking about this so far that it could become a proposal.
There's nearly nothing in between that could've lost the trust of the community. I'm sure there are many who do not want to be pressured into this without good reason. Aaron Liu (talk) 12:57, 18 December 2024 (UTC)
- Absolutely nobody is proposing, suggesting or hinting here that reconfirmation RFAs should become mandatory - other than comments from a few people who oppose the idea of people voluntarily choosing to do something policy explicitly allows them to choose to do. The best way to avoid people being pressured into being accused of arrogance for seeking reconfirmation of their status from the community is to sanction those people who accuse people of arrogance in such circumstances as such comments are in flagrant breach of AGF and NPA. Thryduulf (talk) 14:56, 18 December 2024 (UTC)
- Yes, I’m saying that they should not become preferred. There should be no social pressure to do RfA instead of BN, only pressure intrinsic to the candidate. Aaron Liu (talk) 15:37, 18 December 2024 (UTC)
- Whether they should become preferred in any situation forms no part of this proposal in any way shape or form - this seeks only to reaffirm that they are permitted. A separate suggestion, completely independent of this one, is to encourage (explicitly not mandate) them in some (but explicitly not all) situations. All discussions on this topic would benefit if people stopped misrepresenting the policies and proposals - especially when the falsehoods have been explicitly called out. Thryduulf (talk) 15:49, 18 December 2024 (UTC)
- I am talking and worrying over that separate proposal many here are suggesting. I don’t intend to oppose Option 2, and sorry if I came off that way. Aaron Liu (talk) 16:29, 18 December 2024 (UTC)
- Option 2. In fact, I'm inclined to encourage an RRfA over BN, because nothing requires editors to participate in an RRfA, but the resulting discussion is better for reaffirming community consensus for the former admin or otherwise providing helpful feedback. --Pinchme123 (talk) 21:45, 17 December 2024 (UTC)
- Option 2 WP:RFA has said "
Former administrators may seek reinstatement of their privileges through RfA...
" for over ten years and this is not a problem. I liked the opportunity to be consulted in the current RfA and don't consider this a waste of time. Andrew🐉(talk) 22:14, 17 December 2024 (UTC) - Option 2. People who think it’s not a good use of their time always have the option to scroll past. Innisfree987 (talk) 01:41, 18 December 2024 (UTC)
- 2 - If an administrator gives up sysop access because they plan to be inactive for a while and want to minimize the attack surface of Misplaced Pages, they should be able to ask for permissions back the quickest way possible. If an administrator resigns because they do not intend to do the job anymore, and later changes their mind, they should request a community discussion. The right course of action depends on the situation. Jehochman 14:00, 18 December 2024 (UTC)
- Option 1. I've watched a lot of RFAs and re-RFAs over the years. There's a darn good reason why the community developed the "go to BN" option: saves time, is straightforward, and if there are issues that point to a re-RFA, they're quickly surfaced. People who refuse to take the community-developed process of going to BN first are basically telling the community that they need the community's full attention on their quest to re-admin. Yes, there are those who may be directed to re-RFA by the bureaucrats, in which case, they have followed the community's carefully crafted process, and their re-RFA should be evaluated from that perspective. Risker (talk) 02:34, 19 December 2024 (UTC)
- Option 2. If people want to choose to go through an RFA, who are we to stop them? Stifle (talk) 10:25, 19 December 2024 (UTC)
- Option 2 (status quo/no changes) per meh. This is bureaucratic rulemongering at its finest. Every time RFA reform comes up some editors want admins to be required to periodically reconfirm, then when some admins decide to reconfirm voluntarily, suddenly that's seen as a bad thing. The correct thing to do here is nothing. If you don't like voluntary reconfirmation RFAs, you are not required to participate in them. Ivanvector (/Edits) 19:34, 19 December 2024 (UTC)
- Option 2 I would probably counsel just going to BN most of the time, however there are exceptions and edge cases. To this point these RfAs have been few in number, so the costs incurred are relatively minor. If the number becomes large then it might be worth revisiting, but I don't see that as likely. Some people will probably impose social costs on those who start them by opposing these RfAs, with the usual result, but that doesn't really change the overall analysis. Perhaps it would be better if our idiosyncratic internal logic didn't produce such outcomes, but that's a separate issue and frankly not really worth fighting over either. There are probably some meta issues here I'm unaware of; it's been a while since I've had my finger on the community pulse, so to speak, but they tend to matter far less than people think they do. 184.152.68.190 (talk) 02:28, 20 December 2024 (UTC)
- Option 1, per WP:POINT, WP:NOT#SOCIALNETWORK, WP:NOT#BUREAUCRACY, WP:NOTABOUTYOU, and related principles. We all have far better things to do than read through and argue in/about a totally unnecessary RfA invoked as a "Show me some love!" abuse of process and waste of community time and productivity. I could live with option 3, if option 1 doesn't fly (i.e. shut these silly things down as quickly as possible). But option 2 is just out of the question. — SMcCandlish ☏ ¢ 😼 04:28, 22 December 2024 (UTC)
- Except none of the re-RFAs complained about have been
RfA invoked as a "Show me some love!" abuse of process
, you're arguing against a strawman. Thryduulf (talk) 11:41, 22 December 2024 (UTC)
- It's entirely a matter of opinion and perception, or A) this RfC wouldn't exist, and B) various of your fellow admins like TonyBallioni would not have come to the same conclusion I have. Whether the underlying intent (which no one can determine, lacking as we do any magical mind-reading powers) is solely egotistical is ultimately irrelevant. The actual effect (what matters) of doing this whether for attention, or because you've somehow confused yourself into thinking it needs to be done, is precisely the same: a showy waste of community volunteers' time with no result other than a bunch of attention being drawn to a particular editor and their deeds, without any actual need for the community to engage in a lengthy formal process to re-examine them. — SMcCandlish ☏ ¢ 😼 05:49, 23 December 2024 (UTC)
or because you've somehow confused yourself into thinking it needs to be done
I and many others here agree and stand behind the very reasoning that has "confused" such candidates, at least for WTT. Aaron Liu (talk) 15:37, 23 December 2024 (UTC)
- Option 2. I see no legitimate reason why we should be changing the status quo. Sure, some former admins might find it easier to go through BN, and it might save community time, and most former admins already choose the easier option. However, if a candidate last ran for adminship several years ago, or if issues were raised during their tenure as admin, then it may be helpful for them to ask for community feedback, anyway. There is no "wasted" community time in such a case. I really don't get the claims that this violates WP:POINT, because it really doesn't apply when a former admin last ran for adminship 10 or 20 years ago or wants to know if they still have community trust.On the other hand, if an editor thinks a re-RFA is a waste of community time, they can simply choose not to participate in that RFA. Opposing individual candidates' re-RFAs based solely on opposition to re-RFAs in general is a violation of WP:POINT. – Epicgenius (talk) 14:46, 22 December 2024 (UTC)
- But this isn't the status quo? We've never done a re-RfA before now. The question is whether this previously unconsidered process, which appeared as an emergent behavior, is a feature or a bug. CaptainEek ⚓ 23:01, 22 December 2024 (UTC)
- There have been lots of re-RFAs, historically. They were more common in the 2000s. Evercat in 2003 is the earliest I can find, back before the re-sysopping system had been worked out fully. Croat Canuck back in 2007 was snow-closed after one day, because the nominator and applicant didn't know that they could have gone to the bureaucrats' noticeboard. For more modern examples, HJ Mitchell (2011) is relatively similar to the recent re-RFAs in the sense that the admin resigned uncontroversially but chose to re-RFA before getting the tools back. Immediately following and inspired by HJ Mitchell's, there was the slightly more controversial SarekOfVulcan. That ended successful re-RFAs until 2019's Floquenbeam, which crat-chatted. Since then, there have been none that I remember. There have been several re-RFAs from admins who were de-sysopped or at serious risk of de-sysopping, and a few interesting edge cases such as the potentially optional yet no-consensus SarekVulcan 3 in 2014 and the Rich Farmbrough case in 2015, but those are very different than what we're talking about today. GreenLipstickLesbian (talk) 00:01, 23 December 2024 (UTC)
- To add on to that, Wikipedia:Requests for adminship/Harrias 2 was technically a reconfirmation RFA, which in a sense can be treated as a re-RFA. My point is, there is some precedent for re-RFAs, but the current guidelines are ambiguous as to when re-RFAs are or aren't allowed. – Epicgenius (talk) 16:34, 23 December 2024 (UTC)
- Well thank you both, I've learned something new today. It turns out I was working on a false assumption. It has just been so long since a re-RfA that I assumed it was a truly new phenomenon, especially since there were two in short succession. I still can't say I'm thrilled by the process and think it should be used sparingly, but perhaps I was a bit over concerned. CaptainEek ⚓ 16:47, 23 December 2024 (UTC)
- Option 2 or 3 per Gnoming and CaptainEek. Such RfAs only require at most 30 seconds for one to decide whether or not to spend their time on examination. Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Voluntary reconfirmation RfAs are socially discouraged, so there is usually a very good reason for someone to go back there, such as accountability for past statements in the case of WTT or large disputes during adminship in the case of Hog Farm. I don't think we should outright deny these, and there is no disruption incurred if we don't. Aaron Liu (talk) 15:44, 23 December 2024 (UTC)
- Option 2 but for largely the reasons presented by CaptainEek. KevinL (aka L235 · t · c) 21:58, 23 December 2024 (UTC)
- Option 2 (fine with better labeling) These don't seem harmful to me and, if I don't have time, I'll skip one and trust the judgment of my fellow editors. No objection to better labeling them though, as discussed above. RevelationDirect (talk) 22:36, 23 December 2024 (UTC)
- Option 1 because it's just a waste of time to go through and !vote on candidates who just want the mop restored when he or she or they could get it restored at BN with no problems. But I can also see option 2 being good for a former mod not in good standing. Therapyisgood (talk) 23:05, 23 December 2024 (UTC)
- If you think it is a waste of time to !vote on a candidate, just don't vote on that candidate and none of your time has been wasted. Thryduulf (talk) 23:28, 23 December 2024 (UTC)
- Option 2 per QoH (or me? who knows...) Kline • talk • contribs 04:24, 27 December 2024 (UTC)
- Option 2 Just because someone may be entitled to get the bit back doesn't mean they necessarily should. Look at my RFA3. I did not resign under a cloud, so I could have gotten the bit back by request. However, the RFA established that I did not have the community support at that point, so it was a good thing that I chose that path. I don't particularly support option 3, but I could deal with it. --SarekOfVulcan (talk) 16:05, 27 December 2024 (UTC)
- Option 1 Asking hundreds of people to vet a candidate who has already passed a RfA and is eligible to get the tools back at BN is a waste of the community's time. -- Pawnkingthree (talk) 16:21, 27 December 2024 (UTC)
- Option 2 Abolishing RFA in favour of BN may need to be considered, but I am unconvinced by arguments about RFA being a waste of time. Hawkeye7 (discuss) 19:21, 27 December 2024 (UTC)
- Option 2 I really don't think there's a problem that needs to be fixed here. I am grateful at least a couple administrators have asked for the support of the community recently. SportingFlyer T·C 00:12, 29 December 2024 (UTC)
- Option 2. Keep the status quo of "any editor is free to re-request the tools through the requests for adminship process". Voluntary RfA are rare enough not to be a problem, it's not as though we are overburdened with RfAs. And it’s my time to waste. --Malcolmxl5 (talk) 17:58, 7 January 2025 (UTC)
- Option 2 or Option 3. These are unlikely to happen anyway, it's not like they're going to become a trend. I'm already wasting my time here instead of other more important activities anyway, so what's a little more time spent giving an easy support? fanfanboy (blocktalk) 16:39, 10 January 2025 (UTC)
- Option 1 Agree with Daniel Quinlan that for the problematic editors eligible for re-sysop at BN despite unpopularity, we should rely on our new process of admin recall, rather than pre-emptive RRFAs. I'll add the novel argument that when goliaths like Hog Farm unnecessarily showcase their achievements at RFA, it scares off nonetheless qualified candidates. ViridianPenguin 🐧 ( 💬 ) 17:39, 14 January 2025 (UTC)
- Option 2 per Gnoming/CaptainEek Bluethricecreamman (talk) 20:04, 14 January 2025 (UTC)
- Option 2 or Option 3 - if you regard a re-RfA as a waste of your time, just don't waste it by participating; it's not mandatory. Bastun 12:13, 15 January 2025 (UTC)
Discussion
- @Voorts: If option 2 gets consensus how would this RfC change the wording "Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process." Or is this an attempt to see if that option no longer has consensus? If so why wasn't alternative wording proposed? As I noted above this feels premature in multiple ways. Best, Barkeep49 (talk) 21:43, 15 December 2024 (UTC)
- That is not actually true. ArbCom can (and has) forbidden some editors from re-requesting the tools through RFA. Hawkeye7 (discuss) 19:21, 27 December 2024 (UTC)
- I've re-opened this per a request on my talk page. If other editors think this is premature, they can !vote accordingly and an uninvolved closer can determine if there's consensus for an early close in deference to the VPI discussion. voorts (talk/contributions) 21:53, 15 December 2024 (UTC)
- The discussion at VPI, which I have replied on, seems to me to be different enough from this discussion that both can run concurrently. That is, however, my opinion as a mere editor. — Jkudlick ⚓ (talk) 22:01, 15 December 2024 (UTC)
- @Voorts, can you please reword the RfC to make it clear that Option 2 is the current consensus version? It does not need to be clarified – it already says precisely what you propose. – bradv 22:02, 15 December 2024 (UTC)
- Question: May someone clarify why many view such confirmation RfAs as a waste of community time? No editor is obligated to take up their time and participate. If there's nothing to discuss, then there's no friction or dis-cussing, and the RfA smooth-sails; if a problem is identified, then there was a good reason to go to RfA. I'm sure I'm missing something here. Aaron Liu (talk) 22:35, 15 December 2024 (UTC)
- The intent of RfA is to provide a comprehensive review of a candidate for adminship, to make sure that they meet the community's standards. Is that happening with vanity re-RfAs? Absolutely not, because these people don't need that level of vetting. I wouldn't consider a week long, publicly advertized back patting to be a productive use of volunteer time. -- Ajraddatz (talk) 23:33, 15 December 2024 (UTC)
- But no volunteer is obligated to pat such candidates on the back. Aaron Liu (talk) 00:33, 16 December 2024 (UTC)
- Sure, but that logic could be used to justify any time sink. We're all volunteers and nobody is forced to do anything here, but that doesn't mean that we should promote (or stay silent with our criticism of, I suppose) things that we feel don't serve a useful purpose. I don't think this is a huge deal myself, but we've got two in a short period of time and I'd prefer to do a bit of push back now before they get more common. -- Ajraddatz (talk) 01:52, 16 December 2024 (UTC)
- Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Aaron Liu (talk) 02:31, 16 December 2024 (UTC)
- Except someone who has no need for advanced tools and is not going to use them in any useful fashion, would then skate through with nary a word said about their unsuitability, regardless of the foregone conclusion. The point of RFA is not to rubber-stamp. Unless there is some actual issue or genuine concern they might not get their tools back, they should just re-request them at BN and stop wasting people's time with pointless non-process wonkery. Only in death does duty end (talk) 09:05, 16 December 2024 (UTC)
- I’m confused. Adminship requires continued use of the tools. If you think they're suitable for BN, I don’t see how doing an RfA suddenly makes them unsuitable. If you have concerns, raise them. Aaron Liu (talk) 13:02, 16 December 2024 (UTC)
- I don't think the suggested problem (which I acknowledge not everyone thinks is a problem) is resolved by these options. Admins can still run a re-confirmation RfA after regaining administrative privileges, or even initiate a recall petition. I think as discussed on Barkeep49's talk page, we want to encourage former admins who are unsure if they continue to be trusted by the community at a sufficient level to explore lower cost ways of determining this. isaacl (talk) 00:32, 16 December 2024 (UTC)
- Regarding option 3, establishing a consensus view takes patience. The intent of having a reconfirmation request for administrative privileges is counteracted by closing it swiftly. It provides incentive for rapid voting that may not provide the desired considered feedback. isaacl (talk) 17:44, 17 December 2024 (UTC)
- In re the idea that RfAs use up a lot of community time: I first started editing Misplaced Pages in 2014. There were 62 RfAs that year, which was a historic low. Even counting all of the AElect candidates as separate RfAs, including those withdrawn before voting began, we're still up to only 53 in 2024 – counting only traditional RfAs it's only 18, which is the second lowest number ever. By my count we've had 8 resysop requests at BN in 2024; even if all of those went to RfA, I don't see how that would overwhelm the community. That would still leave us on 26 traditional RfAs per year, or (assuming all of them run the full week) one every other week. Caeciliusinhorto-public (talk) 10:26, 16 December 2024 (UTC)
- What about an option 4 encouraging eligible candidates to go through BN? At the end of the Procedure section, add something like "Eligible users are encouraged to use this method rather than running a new request for adminship." The current wording makes re-RfAing sound like a plausible alternative to a BN request, when in actual fact the former rarely happens and always generates criticism. Giraffer (talk) 12:08, 16 December 2024 (UTC)
- Discouraging RFAs is the second last thing we should be doing (after prohibiting them), rather per my comments here and in the VPI discussion we should be encouraging former administrators to demonstrate that they still have the approval of the community. Thryduulf (talk) 12:16, 16 December 2024 (UTC)
- I think this is a good idea if people do decide to go with option 2, if only to stave off any further mixed messages that people are doing something wrong or rude or time-wasting or whatever by doing a second RfA, when it's explicitly mentioned as a valid thing for them to do. Gnomingstuff (talk) 15:04, 16 December 2024 (UTC)
- If RFA is explicitly a valid thing for people to do (which it is, and is being reaffirmed by the growing consensus for option 2) then we don't need to (and shouldn't) discourage people from using that option. The mixed messages can be staved off by people simply not making comments that explicitly contradict policy. Thryduulf (talk) 15:30, 16 December 2024 (UTC)
- Also a solid option, the question is whether people will actually do it. Gnomingstuff (talk) 22:55, 16 December 2024 (UTC)
- The simplest way would be to just quickly hat/remove all such comments. Pretty soon people will stop making them. Thryduulf (talk) 23:20, 16 December 2024 (UTC)
- This is not new. We've had sporadic "vanity" RfAs since the early days of the process. I don't believe they're particularly harmful, and think that it unlikely that we will begin to see so many of them that they pose a problem. As such I don't think this policy proposal solves any problem we actually have. UninvitedCompany 21:56, 16 December 2024 (UTC)
- This apparent negative feeling evoked at an RFA for a former sysop everyone agrees is fully qualified and trusted certainly will put a bad taste in the mouths of other former admins who might consider a reconfirmation RFA without first visiting BN. This comes in the wake of Worm That Turned's similar rerun. BusterD (talk) 23:29, 16 December 2024 (UTC)
- Nobody should ever be discouraged from seeking community consensus for significant changes. Adminship is a significant change. Thryduulf (talk) 23:32, 16 December 2024 (UTC)
- No argument from me. I was a big Hog Farm backer way back when he was merely one of Misplaced Pages's best content contributors. BusterD (talk) 12:10, 17 December 2024 (UTC)
- All these mentions of editor time make me have to mention The Grand Unified Theory of Editor Time (TLDR: our understanding of how editor time works is dreadfully incomplete). CaptainEek ⚓ 02:44, 17 December 2024 (UTC)
- I went looking for @Tamzin's comment because I know they had hung up the tools and came back, and I was interested in their perspective. But they've given me a different epiphany. I suddenly realize why people are doing confirmation RfAs: it's because of RECALL, and the one year immunity a successful RfA gives you. Maybe everyone else already figured that one out and is thinking "well duh Eek," but I guess I hadn't :) I'm not exactly sure what to do with that epiphany, besides note the emergent behavior that policy change can create. We managed to generate an entirely new process without writing a single word about it, and that's honestly impressive :P CaptainEek ⚓ 18:18, 17 December 2024 (UTC)
- Worm That Turned followed through on a pledge he made in January 2024, before the 2024 review of the request for adminship process began. I don't think a pattern can be extrapolated from a sample size of one (or even two). That being said, it's probably a good thing if admins occasionally take stock of whether or not they continue to hold the trust of the community. As I previously commented, it would be great if these admins would use a lower cost way of sampling the community's opinion. isaacl (talk) 18:31, 17 December 2024 (UTC)
- @CaptainEek: You are correct that a year's "immunity" results from a successful RRFA, but I see no evidence that this has been the reason for the RRFAs. Regards, Newyorkbrad (talk) 00:14, 22 December 2024 (UTC)
- If people decide to go through a community vote to get a one year immunity from a process that only might lead to a community vote which would then have a lower threshold then the one they decide to go through, and also give a year's immunity, then good for them. CMD (talk) 01:05, 22 December 2024 (UTC)
- @CaptainEek I'm mildly bothered by this comment, mildly because I assume it's lighthearted and non-serious. But just in case anyone does feel this way - I was very clear about my reasons for RRFA, I've written a lot about it, anyone is welcome to use my personal recall process without prejudice, and just to be super clear - I waive my "1 year immunity" - if someone wants to start a petition in the next year, do not use my RRfA as a reason not to. I'll update my userpage accordingly. I can't speak for Hog Farm, but his reasoning seems similar to mine, and immunity isn't it. Worm(talk) 10:28, 23 December 2024 (UTC)
- @Worm That Turned my quickly written comment was perhaps not as clear as it could have been :) I'm sorry, I didn't mean to suggest that y'all had run for dubious reasons. As I said in my !vote, "Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here". I guess what I really meant was that the reason that we're having this somewhat spirited conversation seems to be the sense that re-RfA could provide a protection from recall. If not for recall and the one year immunity period, I doubt we'd have cared so much as to suddenly run two discussions about this. CaptainEek ⚓ 16:59, 23 December 2024 (UTC)
- I don't agree. No one else has raised a concern about someone seeking a one-year respite from a recall petition. Personally, I think essentially self-initiating the recall process doesn't really fit the profile of someone who wants to avoid the recall process. (I could invent some nefarious hypothetical situation, but since opening an arbitration case is still a possibility, I don't think it would work out as planned.) isaacl (talk) 05:19, 24 December 2024 (UTC)
- I really don't think this is the reason behind WTT's and HF's reconfirmation RFA's. I don't think their RFA's had much utility and could have been avoided, but I don't doubt for a second that their motivations were anything other than trying to provide transparency and accountability for the community. BugGhost 🦗👻 12:04, 23 December 2024 (UTC)
- I don't really care enough about reconf RFAs to think they should be restricted, but what about a lighter ORCP-like process (maybe even in the same place) where fewer editors can indicate, "yeah OK, there aren't really any concerns here, it would probably save a bit of time if you just asked at BN". Alpha3031 (t • c) 12:40, 19 December 2024 (UTC)
- Can someone accurately describe for me what the status quo is? I reread this RfC twice now and am having a hard time figuring out what the current state of affairs is, and how the proposed alternatives will change them. Duly signed, ⛵ WaltClipper -(talk) 14:42, 13 January 2025 (UTC)
- Option 2 is the status quo. The goal of the RFC is to see if the community wants to prohibit reconfirmation RFAs (option 1). The idea is that reconfirmation RFAs take up a lot more community time than a BN request so are unnecessary. There were 2 reconfirmation RFAs recently after a long dry spell. –Novem Linguae (talk) 20:49, 13 January 2025 (UTC)
- The status quo, documented at Misplaced Pages:Administrators#Restoration of admin tools, is that admins who resigned without being under controversy can seek readminship through either BN (where it's usually given at the discretion of an arbitrary bureaucrat according to the section I linked) or RfA (where all normal RfA procedures apply, and you see a bunch of people saying "the candidate's wasting the community's time and could've uncontroversially gotten adminship back at BN instead"). Aaron Liu (talk) 12:27, 14 January 2025 (UTC)
Guideline against use of AI images in BLPs and medical articles?
I have recently seen AI-generated images be added to illustrate both BLPs (e.g. Laurence Boccolini, now removed) and medical articles (e.g. Legionella#Mechanism). While we don't have any clear-cut policy or guideline about these yet, they appear to be problematic. Illustrating a living person with an AI-generated image might misinform as to how that person actually looks, while using AI in medical diagrams can lead to anatomical inaccuracies (such as the lung structure in the second image, where the pleura becomes a bronchiole twisting over the primary bronchi), or even medical misinformation. While a guideline against AI-generated images in general might be more debatable, do we at least have a consensus for a guideline against these two specific use cases?
To clarify, I am not including potentially relevant AI-generated images that only happen to include a living person (such as in Springfield pet-eating hoax), but exclusively those used to illustrate a living person in a WP:BLP context. Chaotic Enby (talk · contribs) 12:11, 30 December 2024 (UTC)
- What about any biographies, including dead people. The lead image shouldn't be AI generated for any biography. - Sebbog13 (talk) 12:17, 30 December 2024 (UTC)
- Same with animals, organisms etc. - Sebbog13 (talk) 12:20, 30 December 2024 (UTC)
- I personally am strongly against using AI in biographies and medical articles - as you highlighted above, AI is absolutely not reliable in generating accurate imagery and may contribute to medical or general misinformation. I would 100% support a proposal banning AI imagery from these kinds of articles - and a recommendation to not use such imagery other than in specific scenarios. jolielover♥talk 12:28, 30 December 2024 (UTC)
- I'd prefer a guideline prohibiting the use of AI images full stop. There are too many potential issues with accuracy, honesty, copyright, etc. Has this already been proposed or discussed somewhere? – Joe (talk) 12:38, 30 December 2024 (UTC)
- There hasn't been a full discussion yet, and we have a list of uses at Misplaced Pages:WikiProject AI Cleanup/AI images in non-AI contexts, but it could be good to deal with clear-cut cases like this (which are already a problem) first, as the wider discussion is less certain to reach the same level of consensus. Chaotic Enby (talk · contribs) 12:44, 30 December 2024 (UTC)
- Discussions are going on at Wikipedia_talk:Biographies_of_living_persons#Proposed_addition_to_BLP_guidelines and somewhat at Wikipedia_talk:No_original_research#Editor-created_images_based_on_text_descriptions. I recommend workshopping an RfC question (or questions) then starting an RfC. Some1 (talk) 13:03, 30 December 2024 (UTC)
- Oh, didn't catch the previous discussions! I'll take a look at them, thanks! Chaotic Enby (talk · contribs) 14:45, 30 December 2024 (UTC)
- There is one very specific exception I would put to a very sensible blanket prohibition on using AI images to illustrate people, especially BLPs. That is where the person themselves is known to use that image, which I have encountered in Simon Ekpa. CMD (talk) 15:00, 30 December 2024 (UTC)
- While the Ekpa portrait is just an upscale (and I'm not sure what positive value that has for us over its source; upscaling does not add accuracy, nor is it an artistic interpretation meant to reveal something about the source), this would be hard to translate to the general case. Many AI portraits would have copyright concerns, not just from the individual (who may have announced some appropriate release for it), but due to the fact that AI portraits can lean heavily on uncredited individual sources. --Nat Gertler (talk) 16:04, 30 December 2024 (UTC)
- For the purposes of discussing whether to allow AI images at all, we should always assume that, for the purposes of (potential) policies and guidelines, there exist AI images we can legally use to illustrate every topic. We cannot use those that are not legal (including, but not limited to, copyright violations) so they are irrelevant. An image generator trained exclusively on public domain and cc0 images (and any other licenses that explicitly allow derivative works without requiring attribution) would not be subject to any copyright restrictions (other than possibly by the prompter and/or generator's license terms, which are both easy to determine). Similarly we should not base policy on the current state of the technology, but assume that the quality of its output will improve to the point it is equal to that of a skilled human artist. Thryduulf (talk) 17:45, 30 December 2024 (UTC)
- The issue is, either there are public domain/CC0 images of the person (in which case they can be used directly) or there aren't, in which case the AI is making up how a person looks. Chaotic Enby (talk · contribs) 20:00, 30 December 2024 (UTC)
- We tend to use art representations either where no photographs are available (in which case, AI will also not have access to photographs) or where what we are showing is an artist's insight on how this person is perceived, which is not something that AI can give us. In any case, we don't have to build policy now around some theoretical AI in the future; we can deal with the current reality, and policy can be adjusted if things change in the future. And even that theoretical AI does make it more difficult to detect copyvio -- Nat Gertler (talk) 20:54, 30 December 2024 (UTC)
- I wouldn't call it an upscale given whatever was done appears to have removed detail, but we use that image because it is specifically the edited version which was sent to VRT. CMD (talk) 10:15, 31 December 2024 (UTC)
- Is there any clarification on using purely AI-generated images vs. using AI to edit or alter images? AI tools have been implemented in a lot of photo editing software, such as to identify objects and remove them, or generate missing content. The generative expand feature would appear to be unreliable (and it is), but I use it to fill in gaps of cloudless sky produced from stitching together photos for a panorama (I don't use it if there are clouds, or for starry skies, as it produces non-existent stars or unrealistic clouds). Photos of Japan (talk) 18:18, 30 December 2024 (UTC)
- Yes, my proposal is only about AI-generated images, not AI-altered ones. That could in fact be a useful distinction to make if we want to workshop a RfC on the matter. Chaotic Enby (talk · contribs) 20:04, 30 December 2024 (UTC)
- I'm not sure if we need a clear cut policy or guideline against them... I think we treat them the same way as we would treat an editor's kitchen table sketch of the same figure. Horse Eye's Back (talk) 18:40, 30 December 2024 (UTC)
- For those wanting to ban AI images full stop, well, you are too late. Most professional image editing software, including the software in one's smartphone as well as desktop, uses AI somewhere. Noise reduction software uses AI to figure out what might be noise and what might be texture. Sharpening software uses AI to figure out what should be smooth and what might have a sharp detail it can invent. For example, a bird photo not sharp enough to capture feather detail will have feather texture imagined onto it. Same for hair. Or grass. Any image that has been cleaned up to remove litter or dust or spots will have the cleaned area AI generated based on its surroundings. The sky might be extended with AI. These examples are a bit different from a 100% imagined image created from a prompt. But probably not in a way that is useful as a rule.
- I think we should treat AI generated images the same as any user-generated image. It might be a great diagram or it might be terrible. Remove it from the article if the latter, not because someone used AI. If the image claims to photographically represent something, we may judge whether the creator has manipulated the image too much to be acceptable. For example, using AI to remove a person in the background of an image taken of the BLP subject might be perfectly fine. People did that with traditional Photoshop/Lightroom techniques for years. Using AI to generate what claims to be a photo of a notable person is on dodgy ground wrt copyright. -- Colin° 19:12, 30 December 2024 (UTC)
- I'm talking about the case of using AI to generate a depiction of a living person, not using AI to alter details in the background. That is why I only talk about AI-generated images, not AI-altered images. Chaotic Enby (talk · contribs) 20:03, 30 December 2024 (UTC)
- Regarding some sort of brightline ban on the use of any such image in anything article medical related: absolutely not. For example, if someone wanted to use AI tools as opposed to other tools to make an image such as this one (as used in the "medical" article Fluconazole) I don't see a problem, so long as it is accurate. Accurate models and illustrations are useful and that someone used AI assistance as opposed to a chisel and a rock is of no concern. — xaosflux 19:26, 30 December 2024 (UTC)
- I believe that the appropriateness of AI images depends on how they are used. In BLP and medical articles the images are inappropriate, but it would also be inappropriate to ban them completely across the site. By the same logic, if you want a full ban of AI, you are banning fire just because people can get burned, without considering cooking. JekyllTheFabulous (talk) 13:33, 31 December 2024 (UTC)
- I agree that AI-generated images should not be used in most cases. They essentially serve as misinformation. I also don't think that they're really comparable to drawings or sketches because AI-generation uses a level of photorealism that can easily trick the untrained eye into thinking it is real. Di (they-them) (talk) 20:46, 30 December 2024 (UTC)
- AI doesn't need to be photorealistic though. I see two potential issues with AI. The first is images that might deceive the viewer into thinking they are photos, when they are not. The second is potential copyright issues. Outside of the copyright issues I don't see any unique concerns for an AI-generated image (that doesn't appear photorealistic). Any accuracy issues can be handled the same way a user who manually drew an image could be handled. Photos of Japan (talk) 21:46, 30 December 2024 (UTC)
- AI-generated depictions of BLP subjects are often more "illustrative" than drawings/sketches of BLP subjects made by 'regular' editors like you and me. For example, compare the AI-generated image of Pope Francis and the user-created cartoon of Brigette Lundy-Paine. Neither image belongs on their respective bios, of course, but the AI-generated image is no more "misinformation" than the drawing. Some1 (talk) 00:05, 31 December 2024 (UTC)
- I would argue the opposite: neither are made up, but the first one, because of its realism, might mislead readers into thinking that it is an actual photograph, while the second one is clearly a drawing. Which makes the first one less illustrative, as it carries potential for misinformation, despite being technically more detailed. Chaotic Enby (talk · contribs) 00:31, 31 December 2024 (UTC)
- AI-generated images should always say "AI-generated image of " in the image caption. No misleading readers that way. Some1 (talk) 00:36, 31 December 2024 (UTC)
- Yes, and they don't always do it, and we don't have a guideline about this either. The issue is, many people have many different proposals on how to deal with AI content, meaning we always end up with "no consensus" and no guidelines on use at all, even if most people are against it. Chaotic Enby (talk · contribs) 00:40, 31 December 2024 (UTC)
always end up with "no consensus" and no guidelines on use at all, even if most people are against it
Agreed. Even a simple proposal to have image captions note whether an image is AI-generated will have editors wikilawyer over the definition of 'AI-generated.' I take back my recommendation of starting an RfC; we can already predict how that RfC will end. Some1 (talk) 02:28, 31 December 2024 (UTC)
- Of interest perhaps is this 2023 NOR noticeboard discussion on the use of drawn cartoon images in BLPs. Zaathras (talk) 22:38, 30 December 2024 (UTC)
- We should absolutely not be including any AI images in anything that is meant to convey facts (with the obvious exception of an AI image illustrating the concept of an AI image). I also don't think we should be encouraging AI-altered images -- the line between "regular" photo enhancement and what we'd call "AI alteration" is blurry, but we shouldn't want AI edits for the same reason we wouldn't want fake Photoshop composites.
- That said, I would assume good faith here: some of these images are probably being sourced from Commons, and Commons is dealing with a lot of undisclosed AI images. Gnomingstuff (talk) 23:31, 30 December 2024 (UTC)
- Why wouldn't we want "fake Photoshop composites"? A Composite photo can be very useful. I'd be sad if we banned c:Category:Chronophotographic photomontages. WhatamIdoing (talk) 06:40, 31 December 2024 (UTC)
- Sorry, should have been more clear -- composites that present themselves as the real thing, basically what people would use deepfakes for now. Gnomingstuff (talk) 20:20, 31 December 2024 (UTC)
- Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop through techniques like compositing. That line is that the diffusion model is reverse-engineering an image to match a text prompt from a pattern of semi-random static associated with similar text prompts. As such it's just automated glurge, at best it's only as good as the ability of the software to parse a text prompt and the ability of a prompter to draft sufficiently specific language. And absolutely none of that does anything to solve the "hallucination" problem. On the other hand, in photoshop, if I put in two layers both containing a bird on a transparent background, what I, the human making the image, see is what the software outputs. Simonm223 (talk) 18:03, 15 January 2025 (UTC)
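To make the deterministic half of that contrast concrete, here is a minimal sketch (purely illustrative, standard library only) of the conventional "over" compositing operator, the arithmetic that Photoshop-style layer stacking reduces to: the output is a fixed function of the input pixels, with nothing inferred or invented.

```python
def over(top, bottom):
    """Composite a top RGBA pixel over a bottom one (all channels 0.0-1.0)."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    # Resulting alpha: the top layer plus whatever of the bottom shows through.
    out_a = ta + ba * (1 - ta)
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    # Each colour channel is an alpha-weighted blend of the two inputs.
    blend = lambda t, b: (t * ta + b * ba * (1 - ta)) / out_a
    return (blend(tr, br), blend(tg, bg), blend(tb, bb), out_a)

# A fully opaque red pixel over a blue one is simply red:
print(over((1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)))  # -> (1.0, 0.0, 0.0, 1.0)
```

The same inputs always give the same output, which is the distinction being drawn with prompt-driven generation.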
Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop
others do not. If you want to ban or restrict one but not the other then you need to explain how the difference can be reliably determined, and how one is materially different to the other in ways other than your personal opinion. Thryduulf (talk) 18:45, 15 January 2025 (UTC)
- I don't think any guideline, let alone policy, would be beneficial and indeed on balance is more likely to be harmful. There are always only two questions that matter when determining whether we should use an image, and both are completely independent of whether the image is AI-generated or not:
- Can we use this image in this article? This depends on matters like copyright, fair use, whether the image depicts content that is legal for an organisation based in the United States to host, etc. Obviously if the answer is "no", then everything else is irrelevant, but as the law and WMF, Commons and en.wp policies stand today there exist some images in both categories we can use, and some images in both categories we cannot use.
- Does using this image in this article improve the article? This is relative to other options, one of which is always not using any image, but in many cases also involves considering alternative images that we can use. In the case of depictions of specific, non-hypothetical people or objects, one criterion we use to judge whether the image improves the article is whether it is an accurate representation of the subject. If it is not an accurate representation then it doesn't improve the article and thus should not be used, regardless of why it is inaccurate. If it is an accurate representation, then its use in the article will not be misrepresentative or misleading, regardless of whether it is or is not AI generated. It may or may not be the best option available, but if it is then it should be used regardless of whether it is or is not AI generated.
- The potential harm I mentioned above is twofold: firstly, Misplaced Pages is, by definition, harmed when an image exists that we could use to improve an article but we do not use it in that article. A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article. The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been.
- Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware. Thryduulf (talk) 00:52, 31 December 2024 (UTC)
- I agree with almost the entirety of your post with a caveat on whether something "is an accurate representation". We can tell whether non-photorealistic images are accurate by assessing whether the image accurately conveys the idea of what it is depicting. Photos do more than convey an idea, they convey the actual look of something. With AI generated images that are photorealistic it is difficult to assess whether they accurately convey the look of something (the shading might be illogical in subtle ways, there could be an extra finger that goes unnoticed, a mole gets erased), but readers might be deceived by the photo-like presentation into thinking they are looking at an actual photographic depiction of the subject which could differ significantly from the actual subject in ways that go unnoticed. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)
A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article.
That's why I'm suggesting a guideline, not a policy. Guidelines are by design more flexible, and WP:IAR still does (and should) apply in edge cases.
The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been.
In that case, there is a licensing problem. AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.
Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware.
In that case, it's mostly because of the ambiguity in wording: AI-edited images are very common, and are sometimes called "AI-generated", but here we should focus on actual prompt outputs, of the style "I asked a model to generate me an image of a BLP". Chaotic Enby (talk · contribs) 11:13, 31 December 2024 (UTC)
- Simply not having a completely unnecessary policy or guideline is infinitely better than relying on IAR - especially as this would have to be ignored every time it is relevant. When the AI image is not the best option (which obviously includes all the times it's unsuitable or inaccurate) existing policies, guidelines, practice and frankly common sense mean it won't be used. This means the only time the guideline would be relevant is when an AI image is the best option, and as we obviously should be using the best option in all cases we would need to ignore the guideline against using AI images.
AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.
The key words here are "supposed to be" and "shouldn't", editors absolutely will speculate that images are AI-generated and that the Commons labelling is incorrect. We are supposed to assume good faith, but this very discussion shows that when it comes to AI some editors simply do not do that.- Regarding your final point, that might be what you are meaning but it is not what all other commenters mean when they want to exclude all AI images. Thryduulf (talk) 11:43, 31 December 2024 (UTC)
- For your first point, the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image), but the model likely doesn't have any available image either and most likely just made it up. As my proposal is essentially limited to that (I don't include AI-edited images, only those that are purely generated by a model), I don't think there will be many cases where IAR would be needed.
Regarding your two other points, you are entirely correct, and while I am hoping for nuance on the AI issue, it is clear that some editors might not do that. For the record, I strongly disagree with a blanket ban of "AI images" (which includes both blatant "prompt in model" creations and a wide range of more subtle AI retouching tools) or anything like that. Chaotic Enby (talk · contribs) 11:49, 31 December 2024 (UTC)
the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image)
There are only two possible scenarios regarding verifiability:
- The image is an accurate representation and we can verify that (e.g. by reference to non-free photos).
- Verifiability is no barrier to using the image, whether it is AI generated or not.
- If it is the best image available, and editors agree using it is better than not having an image, then it should be used whether it is AI generated or not.
- The image is either not an accurate representation, or we cannot verify whether it is or is not an accurate representation
- The only reasons we should ever use the image are:
- It has been the subject of notable commentary and we are presenting it in that context.
- The subject verifiably uses it as a representation of themselves (e.g. as an avatar or logo)
- This is already policy, whether the image is AI generated or not is completely irrelevant.
- You will note that in no circumstance is it relevant whether the image is AI generated or not. Thryduulf (talk) 13:27, 31 December 2024 (UTC)
- In your first scenario, there is the issue of an accurate AI-generated image misleading people into thinking it is an actual photograph of the person, especially as they are most often photorealistic. Even besides that, a mostly accurate representation can still introduce spurious details, and this can mislead readers as they do not know to what level it is actually accurate. This scenario doesn't really happen with drawings (which are clearly not photographs), and is very much a consequence of AI-generated photorealistic pictures being a thing.
In the second scenario, if we cannot verify that it is not an accurate representation, it can be hard to remove the image with policy-based reasons, which is why a guideline will again be helpful. Having a single guideline against fully AI-generated images takes care of all of these scenarios, instead of having to make new specific guidelines for each case that emerges because of them. Chaotic Enby (talk · contribs) 13:52, 31 December 2024 (UTC)
- If the image is misleading or unverifiable it should not be used, regardless of why it is misleading or unverifiable. This is existing policy and we don't need anything specifically regarding AI to apply it - we just need consensus that the image is misleading or unverifiable. Whether it is or is not AI generated is completely irrelevant. Thryduulf (talk) 15:04, 31 December 2024 (UTC)
AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.
- I mean... yes, we should? At the very least Commons should go hunting for mislabeled images -- that's the whole point of license review. The thing is that Commons is absolutely swamped, and there are hundreds of thousands of images waiting for review of some kind. Gnomingstuff (talk) 20:35, 31 December 2024 (UTC)
- Yes, but that's a Commons thing. A guideline on English Misplaced Pages shouldn't decide of what is to be done on Commons. Chaotic Enby (talk · contribs) 20:37, 31 December 2024 (UTC)
- I just mean that given the reality of the backlogs, there are going to be mislabeled images, and there are almost certainly going to be more of them over time. That's just how it is. We don't have control over that, but we do have control over what images go into articles, and if someone has legitimate concerns about an image being AI-generated, then they should be raising those. Gnomingstuff (talk) 20:45, 31 December 2024 (UTC)
- Support blanket ban on AI-generated images on Misplaced Pages. As others have highlighted above, this is not just a slippery slope but an outright downward spiral. We don't use AI-generated text and we shouldn't use AI-generated images: these aren't reliable and they're also WP:OR scraped from who knows what and where. Use only reliable material from reliable sources. As for the argument of 'software now has AI features', we all know that there's a huge difference between someone using a smoothing feature and someone generating an image from a prompt. :bloodofox: (talk) 03:12, 31 December 2024 (UTC)
- Reply, the section of WP:OR concerning images is WP:OI which states "Original images created by a Wikimedian are not considered original research, so long as they do not illustrate or introduce unpublished ideas or arguments". Using AI to generate an image only violates WP:OR if you are using it to illustrate unpublished ideas, which can be assessed just by looking at the image itself. COPYVIO, however, cannot be assessed from looking at just the image alone, which AI could be violating. However, some images may be too simple to be copyrightable, for example AI-generated images of chemicals or mathematical structures potentially. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)
- Prompt generated images are unquestionably violation of WP:OR and WP:SYNTH: Type in your description and you get an image scraping who knows what and from who knows where, often Misplaced Pages. Misplaced Pages isn't an WP:RS. Get real. :bloodofox: (talk) 23:35, 1 January 2025 (UTC)
- "Unquestionably"? Let me question that, @Bloodofox.
;-)
- If an editor were to use an AI-based image-generating service and the prompt is something like this:
- "I want a stacked bar chart that shows the number of games won and lost by FC Bayern Munich each year. Use the team colors, which are red #DC052D, blue #0066B2, and black #000000. The data is:
- 2014–15: played 34 games, won 25, tied 4, lost 5
- 2015–16: played 34 games, won 28, tied 4, lost 2
- 2016–17: played 34 games, won 25, tied 7, lost 2
- 2017–18: played 34 games, won 27, tied 3, lost 4
- 2018–19: played 34 games, won 24, tied 6, lost 4
- 2019–20: played 34 games, won 26, tied 4, lost 4
- 2020–21: played 34 games, won 24, tied 6, lost 4
- 2021–22: played 34 games, won 24, tied 5, lost 5
- 2022–23: played 34 games, won 21, tied 8, lost 5
- 2023–24: played 34 games, won 23, tied 3, lost 8"
- I would expect it to produce something that is not a violation of either OR in general or OR's SYNTH section specifically. What would you expect, and why do you think it would be okay for me to put that data into a spreadsheet and upload a screenshot of the resulting bar chart, but you don't think it would be okay for me to put that same data into an image generator, get the same thing, and upload that?
- We must not mistake the tools for the output. Hand-crafted bad output is bad. AI-generated good output is good. WhatamIdoing (talk) 01:58, 2 January 2025 (UTC)
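The "spreadsheet route" in the comparison above can be made concrete. This is a purely illustrative sketch, using only the Python standard library and the win/tie/loss figures quoted in the prompt, that renders the same data as a deterministic text-based stacked bar chart: the output follows mechanically from the input numbers, which is what distinguishes it from a model's free-form rendering.

```python
# Season results quoted above: (won, tied, lost), 34 games each.
seasons = {
    "2014–15": (25, 4, 5),
    "2015–16": (28, 4, 2),
    "2016–17": (25, 7, 2),
    "2017–18": (27, 3, 4),
    "2018–19": (24, 6, 4),
    "2019–20": (26, 4, 4),
    "2020–21": (24, 6, 4),
    "2021–22": (24, 5, 5),
    "2022–23": (21, 8, 5),
    "2023–24": (23, 3, 8),
}

def stacked_bar(won, tied, lost):
    # One character per game: W = won, T = tied, L = lost.
    return "W" * won + "T" * tied + "L" * lost

chart = [f"{season}  {stacked_bar(*wtl)}" for season, wtl in seasons.items()]
print("\n".join(chart))
```

Every bar comes out exactly 34 characters long because the data says each season had 34 games; nothing is interpolated or imagined.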
- Assuming you'd even get what you requested from the model without fiddling with the prompt for a while, these sorts of 'but we can use it for graphs and charts' devil's advocate scenarios aren't helpful. We're discussing generating images of people, places, and objects here and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH. As for the charts and graphs, there are any number of ways to produce these. :bloodofox: (talk) 03:07, 2 January 2025 (UTC)
We're discussing generating images of people, places, and objects here
The proposal contains no such limitation.and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH.
Do you have a citation for that? Other people have explained better than I can how that it is not necessarily true, and certainly not unquestionable. Thryduulf (talk) 03:14, 2 January 2025 (UTC)- As you're well aware, these images are produced by scraping and synthesized material from who knows what and where: it's ultimately pure WP:OR to produce these fake images and they're a straightforward product of synthesis of multiple sources (WP:SYNTH) - worse yet, these sources are unknown because training data is by no means transparent. Personally, I'm strongly for a total ban on generative AI on the site exterior to articles on the topic of generative AI. Not only do I find this incredible unethical, I believe it is intensely detrimental to Misplaced Pages, which is an already a flailing and shrinking project. :bloodofox: (talk) 03:23, 2 January 2025 (UTC)
- So you think the lead image at Gisèle Pelicot is a SYNTH violation? Its (human) creator explicitly says "This is not done from one specific photo. As I usually do when I draw portraits of people that I can't see in person, I look at a lot of photos of them and then create my own rendition" in the image description, which sounds like "the product of synthesis of multiple sources" to me, and "these sources are unknown because" the images the artist looked at are not disclosed.
- A lot of my concern about blanket statements is the principle that what's sauce for the goose is sauce for the gander, too. If it's okay for a human to do something by hand, then it should be okay for a human using a semi-automated tool to do it, too.
- (Just in case you hadn't heard, the rumors that the editor base is shrinking have been false for over a decade now. Compared to when you created your account in mid-2005, we have about twice as many high-volume editors.) WhatamIdoing (talk) 06:47, 2 January 2025 (UTC)
- Review WP:SYNTH and your attempts at downplaying a prompt-generated image as "semi-automated" show the root of the problem: if you can't detect the difference between a human sketching from a reference and a machine scraping who-knows-what on the internet, you shouldn't be involved in this discussion. As for editor retention, this remains a serious problem on the site: while the site continues to grow (and becomes core fodder for AI-scraping) and becomes increasingly visible, editor retention continues to drop. :bloodofox: (talk) 09:33, 2 January 2025 (UTC)
- Please scroll down below SYNTH to the next section titled "What is not original research" which begins with WP:OI, our policies on how images relate to OR. OR (including SYNTH) only applies to images with regards to if they illustrate "unpublished ideas or arguments". It does not matter, for instance, if you synthesize an original depiction of something, so long as the idea of that thing is not original. Photos of Japan (talk) 09:55, 2 January 2025 (UTC)
- Yes, which explicitly states:
- It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Misplaced Pages:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.
- Using a machine to generate a fake image of someone is far beyond "manipulation" and it is certainly "false". Clearly we need explicit policies on AI-generated images of people or we wouldn't be having this discussion, but as it stands this clearly also falls under WP:SYNTH: there is zero question that this is a result of "synthesis of published material", even if the AI won't list what it used. Ultimately it's just a synthesis of a bunch of published composite images of who-knows-what (or who-knows-who?) the AI has scraped together to produce a fake image of a person. :bloodofox: (talk) 10:07, 2 January 2025 (UTC)
- Assuming you'd even get what you requested from the model without fiddling with the prompt for a while, this sort of 'but we can use it for graphs and charts' devil's advocate scenario isn't helpful. We're discussing generating images of people, places, and objects here, and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH. As for the charts and graphs, there are any number of ways to produce these. :bloodofox: (talk) 03:07, 2 January 2025 (UTC)
- "Unquestionably"? Let me question that, @Bloodofox.
- Prompt-generated images are unquestionably a violation of WP:OR and WP:SYNTH: type in your description and you get an image scraped from who knows what and who knows where, often Misplaced Pages. Misplaced Pages isn't a WP:RS. Get real. :bloodofox: (talk) 23:35, 1 January 2025 (UTC)
- The latter images you describe should be SVG regardless. If there are models that can generate that, that seems totally fine since it can be semantically altered by hand. Any generation with photographic or "painterly" characteristics (e.g. generating something in the style of a painting or any other convention of visual art that communicates aesthetic particulars and not merely abstract visual particulars) seems totally unacceptable. Remsense ‥ 论 07:00, 31 December 2024 (UTC)
- @Bloodofox, here's an image I created. It illustrates the concept of 1% in an article. I made this myself, by typing 100 emojis and taking a screenshot. Do you really mean to say that if I'd done this with an image-generating AI tool, using a prompt like "Give me 100 dots in a 10 by 10 grid. Make 99 a dark color and 1, randomly placed, look like a baseball" that it would be hopelessly tainted, because AI is always bad? Or does your strongly worded statement mean something more moderate?
- I'd worry about photos of people (including dead people). I'd worry about photos of specific or unique objects that have to be accurate or they're worse than worthless (e.g., artwork, landmarks, maps). But I'm not worried about simple graphs and charts like this one, and I'm not worried about ordinary, everyday objects. If you want to use AI to generate a photorealistic image of a cookie, or a spoon, and the output you get genuinely looks like those objects, I'm not actually going to worry about it. WhatamIdoing (talk) 06:57, 31 December 2024 (UTC)
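For what it's worth, a figure like the 1% grid described above doesn't need AI at all. Here is a minimal, purely deterministic Python sketch (the function name, colors, and sizes are my own invention, not from any editor's actual workflow) that emits a 10-by-10 dot grid as SVG, with one randomly placed dot highlighted:

```python
import random

def one_percent_svg(rows=10, cols=10, seed=42, cell=20):
    """Build a rows x cols grid of dots as an SVG string, with exactly
    one randomly placed dot highlighted to illustrate 1 in 100."""
    random.seed(seed)
    special = random.randrange(rows * cols)  # index of the highlighted dot
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{cols * cell}" height="{rows * cell}">']
    for i in range(rows * cols):
        x = (i % cols) * cell + cell // 2
        y = (i // cols) * cell + cell // 2
        fill = "#ffffff" if i == special else "#1a1a2e"
        stroke = ' stroke="#cc0000"' if i == special else ""
        parts.append(f'<circle cx="{x}" cy="{y}" r="{cell // 3}" '
                     f'fill="{fill}"{stroke}/>')
    parts.append("</svg>")
    return "\n".join(parts)

svg = one_percent_svg()
print(svg.count("<circle"))  # 100
```

Because the output is plain SVG, every element of the result can be inspected and corrected by hand, which sidesteps the black-box concerns raised elsewhere in this thread.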
- As you know, Misplaced Pages has the unique factor of being entirely volunteer-run. Misplaced Pages has fewer and fewer editors and, long-term, we're seeing plummeting birth rates in the areas most Misplaced Pages editors come from. I wouldn't expect a wave of new ones aimed at keeping the site free of bullshit in the near future.
- In addition, the Wikimedia Foundation's hare-brained continued effort to turn the site into its political cash machine is no doubt also not helping, harming the site's public perception and leading to fewer new editors.
- Over the course of decades (I've been here for around 20 years), it seems clear that the site will be negatively impacted by all this, especially in the face of generative AI.
- As a long-time editor who has frequently stumbled upon intense WP:PROFRINGE content, fended off armies of outside actors looking to shape the site into their ideological image (and who have sent me more than a few death threats), and who has identified a large amount of politically-motivated nonsense explicitly designed to fool non-experts in areas I know intimately well (such as folklore and historical linguistics topics), I think it needs to be said that the use of generative AI for content is especially dangerous because of its capability of fooling Misplaced Pages readers and Misplaced Pages editors alike.
- Misplaced Pages is written by people for people. We need to draw a line in the sand to keep from being flooded by increasingly accessible hoax-machines.
- A blanket ban on generative AI resolves this issue or at least hands us another tool with which to attempt to fight back. We don't need what few editors we have here wasting what little time they can give the project checking over an ocean of AI-generated slop: we need more material from reliable sources and better tools to fend off bad actors usable by our shrinking editor base (anyone at the Wikimedia Foundation listening?), not more waves of generative AI garbage. :bloodofox: (talk) 07:40, 31 December 2024 (UTC)
- A blanket ban doesn't actually resolve most of the issues though, and introduces new ones. Bad usages of AI can already be dealt with by existing policy, and malicious users will ignore a blanket ban anyways. Meanwhile, a blanket ban would harm many legitimate usages for AI. For instance, the majority of professional translators (at least Japanese to English) incorporate AI (or similar tools) into their workflow to speed up translations. Just imagine a professional translator who uses AI to help generate rough drafts of foreign language Misplaced Pages articles, before reviewing and correcting them, and another editor learning of this and mass reverting them for breaking the blanket ban, ultimately causing them to leave. Many authors (particularly with carpal tunnel) use AI now to control their voice-to-text (you can train the AI on how you want character names spelled, the formatting of dialogue and other text, etc.). A Misplaced Pages editor could train an AI to convert their voice into Misplaced Pages-formatted text. AI is subtly incorporated now into spell-checkers, grammar-checkers, photo editors, etc., in ways many people are not aware of. A blanket AI ban has the potential to cause many issues for a lot of people, without actually being that effective at dealing with malicious users. Photos of Japan (talk) 08:26, 31 December 2024 (UTC)
- I think this is the least convincing argument I've seen here yet: it contains the ol' 'there are AI features in programs now' line while also attempting to invoke accessibility and a little bit of 'we must have machines to translate!'.
- As a translator myself, I can only say: oh, please. Generative AI is notoriously terrible at translating, and that's not likely to ever change beyond a very, very basic level. Due to the complexities of communication and little matters like nuance, all machine-translated material must be thoroughly checked and modified by, yes, human translators, who often encounter it spitting out complete bullshit scraped from who-knows-where (often Misplaced Pages itself).
- I get that this topic attracts a lot of 'but what if generative AI is better than humans?' from the utopian tech crowd but the reality is that anyone who needs a machine to invent text and visuals for whatever reason simply shouldn't be using it on Misplaced Pages.
- Either you, a human being, can contribute to the project or you can't. Slapping a bunch of machine-generated (generative AI) visuals and text (much of it ultimately coming from Misplaced Pages in the first place!) onto the site isn't some kind of human substitute; it's just machine-regurgitated slop and is not helping the project.
- If people can't be confident that Misplaced Pages is made by humans, for humans, the project is finally on its way out. :bloodofox: (talk) 09:55, 31 December 2024 (UTC)
- I don't know how up to date you are on the current state of translation, but:
- In a previous State of the industry report for freelance translators, the word on TMs and CAT tools was to take them as "a given." A high percentage of translators use at least one CAT tool, and reports on the increased productivity and efficiency that can accompany their use are solid enough to indicate that, unless the kind of translation work you do by its very nature excludes the use of a CAT tool, you should be using one.
- Over three thousand full-time professional translators from around the world responded to the surveys, which were broken into a survey for CAT tool users and one for those who do not use any CAT tool at all.
- 88% of respondents use at least one CAT tool for at least some of their translation tasks.
- Of those using CAT tools, 83% use a CAT tool for most or all of their translation work.
- Mind you, traditionally CAT tools didn't use AI, but many do now, which only adds to potential sources of confusion in a blanket ban of AI. Photos of Japan (talk) 17:26, 31 December 2024 (UTC)
- You're barking up the wrong tree with the pro-generative-AI propaganda in response to me. I think we're all quite aware that generative AI tool integration is now common and that there's also a big effort to replace human translators — and anything that can be "written" — with machine-generated text. I'm also keenly aware that generative AI is absolutely horrible at translation and all of it must be thoroughly checked by humans, as you would know if you were a translator yourself. :bloodofox: (talk) 22:20, 31 December 2024 (UTC)
- "all machine translated material must be thoroughly checked and modified by, yes, human translators"
- You are just agreeing with me here.
- There are translators (particularly with non-creative works) who are using these tools to shift more towards reviewing. It should be up to them to decide what they think is the most efficient method for them. Photos of Japan (talk) 06:48, 1 January 2025 (UTC)
- And any translator who wants to use generative AI to attempt to translate can do so off the site. We're not here to check it for them. I strongly support a total ban on any generative AI used on the site exterior to articles on generative AI. :bloodofox: (talk) 11:09, 1 January 2025 (UTC)
- I wonder what you mean by "on the site". The question here is "Is it okay for an editor to go to a completely different website, generate an image all by themselves, upload it to Commons, and put it in a Misplaced Pages article?" The question here is not "Shall we put AI-generating buttons on Misplaced Pages's own website?" WhatamIdoing (talk) 02:27, 2 January 2025 (UTC)
- I'm talking about users slapping machine-translated and/or machine-generated nonsense all over the site, only for us to have to go behind them and not only check it but correct it. It takes users minutes to do this and it's already happening. It's the same for images. There are very few of us who volunteer here and our numbers are dwindling. We need to be spending our time improving the site rather than opening the gate as wide as possible to a flood of AI-generated/rendered garbage. The site has enough problems that compound every day without having to fend off users armed with hoax machines at every corner. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)
- Sure, we're all opposed to "nonsense", but my question is: What about when the machine happens to generate something that is not "nonsense"?
- I have some worries about AI content. I worry, for example, that they'll corrupt our sources. I worry that List of scholarly publishing stings will get dramatically longer, and also that even more undetected, unconfessed, unretracted papers will get published and believed to be true and trustworthy. I worry that academia will go back to a model in which personal connections are more important, because you really can't trust what's published. I worry that scientific journals will start refusing to publish research unless it comes from someone employed by a trusted institution, that is willing to put its reputation on the line by saying they have directly verified that the work described in the paper was actually performed to their standards, thus scuttling the citizen science movement and excluding people whose institutions are upset with them for other reasons (Oh, you thought you'd take a job elsewhere? Well, we refuse to certify the work you did for the last three years...).
- But I'm not worried about a Misplaced Pages editor saying "Hey AI, give me a diagram of a swingset" or "Make a chart for me out of the data I'm going to give you". In fact, if someone wants to pull the numbers out of Template:Misplaced Pages editor graph (100 per month), feed it to an AI, and replace the template's contents with an AI-generated image (until they finally fix the Graphs extension), I'd consider that helpful. WhatamIdoing (talk) 07:09, 2 January 2025 (UTC)
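As an aside, the "feed the numbers to a tool and get a chart" workflow described above can also be done with an ordinary plotting library, with no generative model involved. A minimal sketch with entirely made-up editor counts (the real figures would come from the template's own data source):

```python
import io

import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Hypothetical monthly counts of high-volume editors; illustrative only.
months = ["2024-07", "2024-08", "2024-09", "2024-10", "2024-11", "2024-12"]
editors = [4200, 4150, 4300, 4280, 4350, 4400]

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(months, editors, marker="o")
ax.set_ylabel("Editors with 100+ edits")
ax.set_title("Active-editor trend (hypothetical data)")
fig.tight_layout()

# Saving as SVG keeps the chart hand-editable, addressing the
# inspectability concern raised elsewhere in this thread.
buf = io.BytesIO()
fig.savefig(buf, format="svg")
```

The same script regenerated each month from fresh numbers would keep such a template current without anyone prompting an image model.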
- Translators are not using generative AI for translation; the applicability of LLMs to regular translation is still in its infancy, and regardless, such tools will not be adding generative faculties to their output, since that is the exact opposite of what translation is supposed to do. JoelleJay (talk) 02:57, 2 January 2025 (UTC)
Translators are not using generative AI for translation
this entirely depends on what you mean by "generative". There are at least three contradictory understandings of the term in this one thread alone. Thryduulf (talk) 03:06, 2 January 2025 (UTC)
- Please, you can just go through the entire process with a simple prompt command now. The results are typically shit but you can generate a ton of it quickly, which is perfect for flooding a site like this one — especially without a strong policy against it. I've found myself cleaning up tons of AI-generated crap (and, yes, rendered stuff) here and elsewhere, and now I'm even seeing AI-generated responses to my own comments. It's beyond ridiculous. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)
- Reply: the section of WP:OR concerning images is WP:OI, which states "Original images created by a Wikimedian are not considered original research, so long as they do not illustrate or introduce unpublished ideas or arguments". Using AI to generate an image only violates WP:OR if you are using it to illustrate unpublished ideas, which can be assessed just by looking at the image itself. COPYVIO, however, cannot be assessed from looking at just the image alone, which AI could be violating. However, some images, such as AI-generated images of chemicals or mathematical structures, may potentially be too simple to be copyrightable. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)
- Ban AI-generated from all articles, AI anything from BLP and medical articles is the position that seems it would permit all instances where there are plausible defenses that AI use does not fabricate or destroy facts intended to be communicated in the context of the article. That scrutiny is stricter with BLP and medical articles in general, and the restriction should be stricter to match. Remsense ‥ 论 06:53, 31 December 2024 (UTC)
- @Remsense, please see my comment immediately above. (We had an edit conflict.) Do you really mean "anything" and everything? Even a simple chart? WhatamIdoing (talk) 07:00, 31 December 2024 (UTC)
- I think my previous comment is operative: almost anything we can see AI used programmatically to generate should be SVG, not raster—even if it means we are embedding raster images in SVG to generate examples like the above. I do not know if there are models that can generate SVG, but if there are I happily state I have no problem with that. I think I'm at risk of seeming downright paranoid—but understanding how errors can propagate and go unnoticed in practice, if we're to trust a black box, we need to at least be able to check what the black box has done on a direct structural level. Remsense ‥ 论 07:02, 31 December 2024 (UTC)
- A quick web search indicates that there are generative AI programs that create SVG files. WhatamIdoing (talk) 07:16, 31 December 2024 (UTC)
- Makes perfect sense that there would be. Again, maybe I come off like a paranoid lunatic, but I really need either the ability to check what the thing is doing, or the ability to check and correct exactly what a black box has done. (In my estimation, if you want to know what procedures a person has done, theoretically you can ask them and get a fairly satisfactory result, and the pre-AI algorithms used in image manipulation are canonical and more or less transparent. Acknowledging human error etc., with AI there is not even the theoretical promise that one can be given a truthful account of how it decided to do what it did.) Remsense ‥ 论 07:18, 31 December 2024 (UTC)
- Like everyone said, there should be a de facto ban on using AI images in Misplaced Pages articles. They are effectively fake images pretending to be real, so they are out of step with the values of Misplaced Pages.--♦IanMacM♦ 08:20, 31 December 2024 (UTC)
- Except, not everybody has said that, because the majority of those of us who have refrained from hyperbole have pointed out that not all AI images are "fake images pretending to be real" (and those few that are can already be removed under existing policy). You might like to try actually reading the discussion before commenting further. Thryduulf (talk) 10:24, 31 December 2024 (UTC)
- @Remsense, exactly how much "ability to check what the thing is doing" do you need to be able to do, when the image shows 99 dots and 1 baseball, to illustrate the concept of 1%? If the image above said {{pd-algorithm}} instead of {{cc-by-sa-4.0}}, would you remove it from the article, because you just can't be sure that it shows 1%? WhatamIdoing (talk) 02:33, 2 January 2025 (UTC)
- The above is a useful example to an extent, but it is a toy example. I really do think it is required in general when we aren't dealing with media we ourselves are generating. Remsense ‥ 论 04:43, 2 January 2025 (UTC)
- How do we differentiate in policy between a "toy example" (that really would be used in an article) and "real" examples? Is it just that if I upload it, then you know me, and assume I've been responsible? WhatamIdoing (talk) 07:13, 2 January 2025 (UTC)
- There definitely exist generative AI for SVG files. Here's an example: I used generative AI in Adobe Illustrator to generate the SVG gear in File:Pinwheel scheduling.svg (from Pinwheel scheduling) before drawing by hand the more informative parts of the image. The gear drawing is not great (a real gear would have uniform tooth shape) but maybe the shading is better than I would have done by hand, giving an appearance of dimensionality and surface material while remaining deliberately stylized. Is that the sort of thing everyone here is trying to forbid?
- I can definitely see a case for forbidding AI-generated photorealistic images, especially of BLPs, but that's different from human oversight of AI in the generation of schematic images such as this one. —David Eppstein (talk) 01:15, 1 January 2025 (UTC)
- I'd include BDPs, too. I had to get a few AI-generated images of allegedly Haitian presidents deleted a while ago. The "paintings" were 100% fake, right down to the deformed medals on their military uniforms. An AI-generated "generic person" would be okay for some purposes. For a few purposes (e.g., illustrations of Obesity) it could even be preferable to have a fake "person" than a real one. But for individual/named people, it would be best not to have anything unless it definitely looks like the named person. WhatamIdoing (talk) 07:35, 2 January 2025 (UTC)
- I put it to you that our decision on this requires nuance. It's obviously insane to allow AI-generated images of, for example, Donald Trump, and it's obviously insane to ban AI-generated images from, for example, artificial intelligence art or Théâtre D'opéra Spatial.—S Marshall T/C 11:21, 31 December 2024 (UTC)
- Of course; that's why I'm only looking at specific cases and refraining from proposing a blanket ban on generative AI. Regarding Donald Trump, we do have one AI-generated image of him that is reasonable to allow (in Springfield pet-eating hoax), as the image itself was the subject of relevant commentary. Of course, this is different from using an AI-generated image to illustrate Donald Trump himself, which is what my proposal would recommend against. Chaotic Enby (talk · contribs) 11:32, 31 December 2024 (UTC)
- That's certainly true, but others are adopting much more extreme positions than you are, and it was the more extreme views that I wished to challenge.—S Marshall T/C 11:34, 31 December 2024 (UTC)
- Thanks for the (very reasoned) addition, I just wanted to make my original proposal clear. Chaotic Enby (talk · contribs) 11:43, 31 December 2024 (UTC)
- Going off WAID's example above, perhaps we should be trying to restrict the use of AI where image accuracy/precision is essential, as it would be for BLP and medical info, among other cases, but in cases where we are talking about generic or abstract concepts, like the 1% image, its use is reasonable. I would still say we should strongly prefer an image made by a human with high control of the output, but when accuracy is not as important as just the visualization, it's reasonable to turn to AI to help. Masem (t) 15:12, 31 December 2024 (UTC)
- Support total ban of AI imagery - There are probable copyright problems and veracity problems with anything coming out of a machine. In a world of manipulated reality, Misplaced Pages will be increasingly respected for holding a hard line against synthetic imagery. Carrite (talk) 15:39, 31 December 2024 (UTC)
- For both issues AI vs not AI is irrelevant. For copyright, if the image is a copyvio we can't use it regardless of whether it is AI or not AI; if it's not a copyvio then that's not a reason to use or not use the image. If the image is not verifiably accurate then we already can (and should) exclude it, regardless of whether it is AI or not AI. For more detail see the extensive discussion above, which you've either not read or ignored. Thryduulf (talk) 16:34, 31 December 2024 (UTC)
- Yes, we absolutely should ban the use of AI-generated images in these subjects (and beyond, but that's outside the scope of this discussion). AI should not be used to make up a simulation of a living person. It does not actually depict the person and may introduce errors or flaws that don't actually exist. The picture does not depict the real person because it is quite simply fake.
- Even worse would be using AI to develop medical images in articles in any way. The possibility for error there is unacceptable. Yes, humans make errors too, but there, there is a) someone with the responsibility to fix it and b) someone conscious who actually made the picture, rather than a black box that spat it out after looking at similar training data. Cremastra 🎄 u — c 🎄 20:08, 31 December 2024 (UTC)
- It's incredibly disheartening to see multiple otherwise intelligent editors who have apparently not read and/or not understood what has been said in the discussion but rather responded with what appear to be knee-jerk reactions to anti-AI scaremongering. The sky will not fall in, Misplaced Pages is not going to be taken over by AI, AI is not out to subvert Misplaced Pages, and we already can (and do) remove (and more commonly not add in the first place) false and misleading information/images. Thryduulf (talk) 20:31, 31 December 2024 (UTC)
- So what benefit does allowing AI images bring? We shouldn't be forced to decide these on a case-by-case basis.
- I'm sorry to dishearten you, but I still respectfully disagree with you. And I don't think this is "scaremongering" (although I admit that if it was, I would of course claim it wasn't). Cremastra 🎄 u — c 🎄 21:02, 31 December 2024 (UTC) Cremastra 🎄 u — c 🎄 20:56, 31 December 2024 (UTC)
- Determining what benefits any image brings to Misplaced Pages can only be done on a case-by-case basis. It is literally impossible to know whether any image improves the encyclopaedia without knowing the context of which portion of what article it would illustrate, and what alternative images are and are not available for that same spot.
- The benefit of allowing AI images is that when an AI image is the best option for a given article we use it. We gain absolutely nothing by prohibiting using the best image available, indeed doing so would actively harm the project without bringing any benefits. AI images that are misleading, inaccurate or any of the other negative things any image can be are never the best option and so are never used - we don't need any policies or guidelines to tell us that. Thryduulf (talk) 21:43, 31 December 2024 (UTC)
- It's incredibly disheartening to see multiple otherwise intelligent editors who have apparently not read and/or not understood what has been said in the discussion but rather responding with what appears to be knee-jerk reactions to anti-AI scaremongering. The sky will not fall in, Misplaced Pages is not going to be taken over by AI, AI is not out to subvert Misplaced Pages, we already can (and do) remove (and more commonly not add in the first placE) false and misleading information/images. Thryduulf (talk) 20:31, 31 December 2024 (UTC)
- Support blanket ban on AI-generated text or images in articles, except in contexts where the AI-generated content is itself the subject of discussion (in a specific or general sense). Generative AI is fundamentally at odds with Misplaced Pages's mission of providing reliable information, because of its propensity to distort reality or make up information out of whole cloth. It has no place in our encyclopedia. —pythoncoder (talk | contribs) 21:34, 31 December 2024 (UTC)
- Support blanket ban on AI-generated images except in ABOUTSELF contexts. This is especially a problem given the preeminence Google gives to Misplaced Pages images in its image search. JoelleJay (talk) 22:49, 31 December 2024 (UTC)
- Ban across the board, except in articles which are actually about AI-generated imagery or the tools used to create them, or the image itself is the subject of substantial commentary within the article for some reason. Even in those cases, clearly indicating that the image is AI-generated should be required. Seraphimblade 00:29, 1 January 2025 (UTC)
- Oppose blanket bans that would forbid the use of AI assistance in creating diagrams or other deliberately stylized content. Also oppose blanket bans that would forbid AI illustrations in articles about AI illustrations. I am not opposed to banning photorealistic AI-generated images in non-AI-generation contexts or banning AI-generated images from BLPs unless the image itself is specifically relevant to the subject of the BLP. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)
- Oppose blanket bans. AI is just a new buzzword so, for example, Apple phones now include "Apple Intelligence" as a standard feature. Does this mean that photographs taken using Apple phones will be inadmissible? That would be silly because legacy technologies are already rife with issues of accuracy and verification. For example, there's an image on the main page right now (right). This purports to be a particular person ("The Father of Australia") but, if you check the image description, you find that it may have been his brother and even the attribution to the artist is uncertain. AI features may help in exposing such existing weaknesses in our image use and so we should be free to use them in an intelligent way. Andrew🐉(talk) 08:03, 1 January 2025 (UTC)
- So, you expect an AI, notoriously trained on Misplaced Pages (and whatever else is floating around on the internet), to correct Misplaced Pages where humans have failed... using the data it scraped from Misplaced Pages (and who knows where else)? :bloodofox: (talk) 11:12, 1 January 2025 (UTC)
- I tried using the Deep Research option of Gemini to assess the attribution of the Macquarie portrait. Its stated methodology seemed quite respectable and sensible.
The Opie Portrait of Lachlan Macquarie: An Examination of its Attribution: Methodology
To thoroughly investigate the attribution of the Opie portrait of Lachlan Macquarie, a comprehensive research process was undertaken. This involved several key steps:
- It was quite transparent in listing and citing the sources that it used for its analysis. These included the Misplaced Pages image but if one didn't want that included, it would be easy to exclude it.
- So, AIs don't have to be inscrutable black boxes. They can have programmatic parameters like the existing bots and scripts that we use routinely on Misplaced Pages. Such power tools seem needed to deal with the large image backlogs that we have on Commons. Perhaps they could help by providing captions and categories where these don't exist.
- Andrew🐉(talk) 09:09, 2 January 2025 (UTC)
- They don't have to be black boxes but they are by design: they exist in a legally dubious area and thus hide what they're scraping to avoid further legal problems. That's no secret. We know for example that Misplaced Pages is a core data set for likely most AIs today. They also notoriously and quite confidently spit out a lie ("hallucinate") and frequently spit out total nonsense. Add to that that they're restricted to whatever is floating around on the internet or whatever other data set they've been fed (usually just more internet), and many specialist topics, like texts on ancient history and even standard reference works, are not accessible on the internet (despite Google's efforts). :bloodofox: (talk) 09:39, 2 January 2025 (UTC)
- While its stated methodology seems sensible, there's no evidence that it actually followed that methodology. The bullet points are pretty vague, and are pretty much the default methodologies used to examine actual historical works. Chaotic Enby (talk · contribs) 17:40, 2 January 2025 (UTC)
- Yes, there's evidence. As I stated above, the analysis is transparent and cites the sources that it used. And these all seem to check out rather than being invented. So, this level of AI goes beyond the first generation of LLM and addresses some of their weaknesses. I suppose that image generation is likewise being developed and improved and so we shouldn't rush to judgement while the technology is undergoing rapid development. Andrew🐉(talk) 17:28, 4 January 2025 (UTC)
- Oppose blanket ban: best of luck to editors here who hope to be able to ban an entirely undefined and largely undetectable procedure. The term 'AI' as commonly used is no more than a buzzword - what exactly would be banned? And how does it improve the encyclopedia to encourage editors to object to images not simply because they are inaccurate, or inappropriate for the article, but because they subjectively look too good? Will the image creator be quizzed on Commons about the tools they used? Will creators who are transparent about what they have created have their images deleted while those who keep silent don’t? Honestly, this whole discussion is going to seem hopelessly outdated within a year at most. It’s like when early calculators were banned in exams because they were ‘cheating’, forcing students to use slide rules. MichaelMaggs (talk) 12:52, 1 January 2025 (UTC)
- I am genuinely confused as to why this has turned into a discussion about a blanket ban, even though the original proposal exclusively focused on AI-generated images (the kind that is generated by an AI model from a prompt, which are already tagged on Commons, not regular images with AI enhancement or tools being used) and only in specific contexts. Not sure where the "subjectively look too good" thing even comes from, honestly. Chaotic Enby (talk · contribs) 12:58, 1 January 2025 (UTC)
- That just shows how ill-defined the whole area is. It seems you restrict the term 'AI-generated' to mean "images generated solely(?) from a text prompt". The question posed above has no such restriction. What a buzzword means is largely in the mind of the reader, of course, but to me and I think to many, 'AI-generated' means generated by AI. MichaelMaggs (talk) 13:15, 1 January 2025 (UTC)
- I used the text prompt example because that is the most common way to have an AI model generate an image, but I recognize that I should've clarified it better. There is definitely a distinction between an image being generated by AI (like the Laurence Boccolini example below) and an image being altered or retouched by AI (which includes many features integrated in smartphones today). I don't think it's a "buzzword" to say that there is a meaningful difference between an image being made up by an AI model and a preexisting image being altered in some way, and I am surprised that many people understand "AI-generated" as including the latter. Chaotic Enby (talk · contribs) 15:24, 1 January 2025 (UTC)
- Oppose as unenforceable. I just want you to imagine enforcing this policy against people who have not violated it. All this will do is allow Wikipedians who primarily contribute via text to accuse artists of using AI because they don't like the results to get their contributions taken down. I understand the impulse to oppose AI on principle, but the labor and aesthetic issues don't actually have anything to do with Misplaced Pages. If there is not actually a problem with the content conveyed by the image—for example, if the illustrator intentionally corrected any hallucinations—then someone objecting over AI is not discussing page content. If the image was not even made with AI, they are hallucinating based on prejudices that are irrelevant to the image. The bottom line is that images should be judged on their content, not how they were made. Besides all the policy-driven stuff, if Misplaced Pages's response to the creation of AI imaging tools is to crack down on all artistic contributions to Misplaced Pages (which seems to be the inevitable direction of these discussions), what does that say? Categorical bans of this kind are ill-advised and anti-illustrator. lethargilistic (talk) 15:41, 1 January 2025 (UTC)
- And the same applies to photography, of course. If in my photo of a garden I notice there is a distracting piece of paper on the lawn, nobody would worry if I used the old-style clone-stamp tool to remove it in Photoshop, adding new grass in its place (I'm assuming here that I don't change details of the actual landscape in any way). Now, though, Photoshop uses AI to achieve essentially the same result while making it simpler for the user. A large proportion of all processed photos will have at least some similar but essentially undetectable "generated AI" content, even if only a small area of grass. There is simply no way to enforce the proposed policy, short of banning all high-quality photography – which requires post-processing by design, and in which similar encyclopedically non-problematic edits are commonplace. MichaelMaggs (talk) 17:39, 1 January 2025 (UTC)
- Before anyone objects that my example is not "an image generated from a text prompt", note that there's no mention of such a restriction in the proposal we are discussing. Even if there were, it makes no difference. Photoshop can already generate photo-realistic areas from a text prompt. If such use is non-misleading and essentially undetectable, it's fine; if it changes the image in such a way as to make it misleading, inaccurate or non-encyclopedic in any way it can be challenged on that basis. MichaelMaggs (talk) 17:58, 1 January 2025 (UTC)
- As I said previously, the text prompt is just an example, not a restriction of the proposal. The point is that you talk about editing an existing image (which is what you talk about, as you say "if it changes the image"), while I am talking about creating an image ex nihilo, which is what "generating" means. Chaotic Enby (talk · contribs) 18:05, 1 January 2025 (UTC)
- I'm talking about a photograph with AI-generated areas within it. This is commonplace, and is targeted by the proposal. Categorical bans of the type suggested are indeed ill-advised. MichaelMaggs (talk) 18:16, 1 January 2025 (UTC)
- Even if the ban is unenforceable, there are many editors who will choose to use AI images if they are allowed and just as cheerfully skip them if they are not allowed. That would mean the only people posting AI images are those who choose to break the rule and/or don't know about it. That would probably add up to many AI images not used. Darkfrog24 (talk) 22:51, 3 January 2025 (UTC)
- Support blanket ban because "AI" is a fundamentally unethical technology based on the exploitation of labor, the wanton destruction of the planetary environment, and the subversion of every value that an encyclopedia should stand for. ABOUTSELF-type exceptions for "AI" output that has already been generated might be permissible, in order to document the cursed time in which we live, but those exceptions are going to be rare. How many examples of Shrimp Jesus slop do we need? XOR'easter (talk) 23:30, 1 January 2025 (UTC)
- Support blanket ban - Primarily because of the "poisoning the well"/"dead internet" issues created by it. FOARP (talk) 14:30, 2 January 2025 (UTC)
- Support a blanket ban to assure some control over AI-creep in Misplaced Pages. And per discussion. Randy Kryn (talk) 10:50, 3 January 2025 (UTC)
- Support that WP:POLICY applies to images: images should be verifiable, neutral, and absent of original research. AI is just the latest quickest way to produce images that are original, unverifiable, and potentially biased. Is anyone in their right mind saying that we allow people to game our rules on WP:OR and WP:V by using images instead of text? Shooterwalker (talk) 17:04, 3 January 2025 (UTC)
- As an aside on this: in some cases Commons is being treated as a way of side-stepping WP:NOR and other restrictions. Stuff that would get deleted if it were written content on WP gets in to WP as images posted on Commons. The worst examples are those conflict maps that are created from a bunch of Twitter posts (eg the Syrian civil war one). AI-generated imagery is another field where that appears to be happening. FOARP (talk) 10:43, 4 January 2025 (UTC)
- Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. I support an exception for when the article is about the image itself and that image is notable, such as the photograph of the black-and-blue/gold-and-white dress in The Dress and/or examples of AI images in articles in which they are relevant. E.g. "here is what a hallucination is: count the fingers." Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)
- First, I think any guidance should avoid referring to specific technology, as that changes rapidly and is used for many different purposes. Second, assuming that the image in question has a suitable copyright status for use on Misplaced Pages, the key question is whether or not the reliability of the image has been established. If the intent of the image is to display 100 dots with 99 having the same appearance and 1 with a different appearance, then ordinary math skills are sufficient and so any Misplaced Pages editor can evaluate the reliability without performing original research. If the intent is to depict a likeness of a specific person, then there needs to be reliable sources indicating that the image is sufficiently accurate. This is the same for actual photographs, re-touched ones, drawings, hedcuts, and so forth. Typically this can be established by a reliable source using that image with a corresponding description or context. isaacl (talk) 17:59, 4 January 2025 (UTC)
- Support Blanket Ban on AI generated imagery per most of the discussion above. It's a very slippery slope. I might consider a very narrow exception for an AI generated image of a person that was specifically authorized or commissioned by the subject. -Ad Orientem (talk) 02:45, 5 January 2025 (UTC)
- Oppose blanket ban It is far too early to take an absolutist position, particularly when the potential is enormous. Misplaced Pages is already an image desert and to reject something that is only at the cusp of development is unwise. scope_creep 20:11, 5 January 2025 (UTC)
- Support blanket ban on AI-generated images except in ABOUTSELF contexts. An encyclopedia should not be using fake images. I do not believe that further nuance is necessary. LEPRICAVARK (talk) 22:44, 5 January 2025 (UTC)
- Support blanket ban as the general guideline, as accuracy, personal rights, and intellectual rights issues are very weighty, here (as is disclosure to the reader). (I could see perhaps supporting adoption of a sub-guideline for ways to come to a broad consensus in individual use cases (carve-outs, except for BLPs) which address all the weighty issues on an individual use basis -- but that needs to be drafted and agreed to, and there is no good reason to wait to adopt the general ban in the meantime). Alanscottwalker (talk) 15:32, 8 January 2025 (UTC)
- Support indefinite blanket ban except ABOUTSELF and simple abstract examples (such as the image of 99 dots above). In addition to all the issues raised above, including copyvio and creator consent issues, in cases of photorealistic images it may never be obvious to all readers exactly which elements of the image are guesswork. The cormorant picture at the head of the section reminded me of the first video of a horse in gallop, in 1878. Had AI been trained on paintings of horses instead of actual videos and used to "improve" said videos, we would've ended up with serious delusions about the horse's gait. We don't know what questions -- scientific or otherwise -- photography will be used to settle in the coming years, but we do know that consumer-grade photo AI has already been trained to intentionally fake detail to draw sales, such as on photos of the Moon. I think it's unrealistic to require contributors to take photos with expensive cameras or specially-made apps, but Misplaced Pages should act to limit its exposure to this kind of technology as far as is feasible. Daß Wölf 20:57, 9 January 2025 (UTC)
- Support at least some sort of recommendation against the use of AI-generated imagery in non-AI contexts−except obviously where the topic of the article is specifically related to AI-generated imagery (Generative artificial intelligence, Springfield pet-eating hoax, AI slop, etc.). At the very least the consensus below about BLPs should be extended to all historical biographies, as all the examples I've seen (see WP:AIIMAGE) fail WP:IMAGERELEVANCE (failing to add anything to the sourced text) and serve only to mislead the reader. We include images for a reason, not just for decoration. I'm also reminded of the essay WP:PORTRAIT, and the distinction it makes between notable depictions of historical people (which can be useful to illustrate articles) and non-notable fictional portraits which in its (imo well argued) view "have no legitimate encyclopedic function whatsoever". Cakelot1 ☞️ talk 14:36, 14 January 2025 (UTC)
- Anything that fails WP:IMAGERELEVANCE can be, should be, and is, excluded from use already, likewise any images which "have no legitimate encyclopedic function whatsoever". This applies to AI and non-AI images equally and identically. Just as we don't have or need a policy or guideline specifically saying don't use irrelevant or otherwise non-encyclopaedic watercolour images in articles, we don't need any policy or guideline specifically calling out AI - because it would (as you demonstrate) need to carve out exceptions for when its use is relevant. Thryduulf (talk) 14:45, 14 January 2025 (UTC)
- That would be an easy change; just add a sentence like "AI-generated images of individual people are primarily decorative and should not be used". We should probably do that no matter what else is decided. WhatamIdoing (talk) 23:24, 14 January 2025 (UTC)
- Except that is both not true and irrelevant. Some AI-generated images of individual people are primarily decorative, but not all of them. If an image is purely decorative it shouldn't be used, regardless of whether it is AI-generated or not. Thryduulf (talk) 13:43, 15 January 2025 (UTC)
- Can you give an example of an AI-generated image of an individual person that is (a) not primarily decorative and also (b) not copied from the person's social media/own publications, and that (c) at least some editors think would be a good idea?
- "Hey, AI, please give me a realistic-looking photo of this person who died in the 12th century" is not it. "Hey, AI, we have no freely licensed photos of this celebrity, so please give me a line-art caricature" is not it. What is? WhatamIdoing (talk) 17:50, 15 January 2025 (UTC)
- Criteria (b) and (c) were not part of the statement I was responding to, and make it a very significantly different assertion. I will assume that you are not making motte-and-bailey arguments in bad faith, but the frequent fallacious argumentation in these AI discussions is getting tiresome.
- Even with the additional criteria it is still irrelevant - if no editor thinks an image is a good idea, then it won't be used in an article regardless of why they don't think it's a good idea. If some editors think an individual image is a good idea then it's obviously potentially encyclopaedic and needs to be judged on its merits (whether it is AI-generated is completely irrelevant to its encyclopaedic value). An image that the subject uses on their social media/own publications to identify themselves (for example as an avatar) is the perfect example of the type of image which is frequently used in articles about that individual. Thryduulf (talk) 18:56, 15 January 2025 (UTC)
BLPs
CONSENSUS AGAINST: There is clear consensus against using AI-generated imagery to depict BLP subjects. Marginal cases (such as major AI enhancement or where an AI-generated image of a living person is itself notable) can be worked out on a case-by-case basis. I will add a sentence reflecting this consensus to the image use policy and the BLP policy. —Ganesha811 (talk) 14:02, 8 January 2025 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Are AI-generated images (generated via text prompts, see also: text-to-image model) okay to use to depict BLP subjects? The Laurence Boccolini example was mentioned in the opening paragraph. The image was created using Grok / Aurora, "a text-to-image model developed by xAI, to generate images...As with other text-to-image models, Aurora generates images from natural language descriptions, called prompts." Some1 (talk) 12:34, 31 December 2024 (UTC)
03:58, January 3, 2025 note: these images can either be photorealistic in style (such as the Laurence Boccolini example) or non-photorealistic in style (see the Germán Larrea Mota-Velasco example, which was generated using DALL-E, another text-to-image model). Some1 (talk) 11:10, 3 January 2025 (UTC)
Notified: Misplaced Pages talk:Biographies of living persons, Misplaced Pages talk:No original research, Misplaced Pages talk:Manual of Style/Images, Template:Centralized discussion -- Some1 (talk) 11:27, 2 January 2025 (UTC)
- No. I don't think they are at all, as, despite looking photorealistic, they are essentially just speculation about what the person might look like. A photorealistic image conveys the look of something up to the details, and giving a false impression of what the person looks like (or, at best, just guesswork) is actively counterproductive. (Edit 21:59, 31 December 2024 (UTC): clarified bolded !vote since everyone else did it) Chaotic Enby (talk · contribs) 12:46, 31 December 2024 (UTC)
- That AI generated image looks like Dick Cheney wearing a Laurence Boccolini suit. ScottishFinnishRadish (talk) 12:50, 31 December 2024 (UTC)
- There are plenty of non-free images of Laurence Boccolini with which this image can be compared. Assuming at least most of those are accurate representations of them (I've never heard of them before and have no other frame of reference) the image above is similar to but not an accurate representation of them (most obviously but probably least significantly, in none of the available images are they wearing that design of glasses). This means the image should not be used to identify them unless they use it to identify themselves. It should not be used elsewhere in the article unless it has been the subject of notable commentary. That it is an AI image makes absolutely no difference to any of this. Thryduulf (talk) 16:45, 31 December 2024 (UTC)
- No. Well, that was easy. They are fake images; they do not actually depict the person. They depict an AI-generated simulation of a person that may be inaccurate. Cremastra 🎄 u — c 🎄 20:00, 31 December 2024 (UTC)
- Even if the subject uses the image to identify themselves, the image is still fake. Cremastra (u — c) 19:17, 2 January 2025 (UTC)
- No, with the caveat that it's mostly on the grounds that we don't have enough information and when it comes to BLP we are required to exercise caution. If at some point in the future AI-generated photorealistic simulacrums of living people become mainstream with major newspapers and academic publishers it would be fair to revisit any restrictions, but in this case I strongly believe that we should follow not lead. Horse Eye's Back (talk) 20:37, 31 December 2024 (UTC)
- No. The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person. —pythoncoder (talk | contribs) 21:30, 31 December 2024 (UTC)
- No except perhaps, maybe, if the subject explicitly is already using that image to represent themselves. But mostly no. -Kj cheetham (talk) 21:32, 31 December 2024 (UTC)
- Yes, when that image is an accurate representation and better than any available alternative, used by the subject to represent themselves, or the subject of notable commentary. However, as these are the exact requirements to use any image to represent a BLP subject this is already policy. Thryduulf (talk) 21:46, 31 December 2024 (UTC)
- How well can we determine how accurate a representation it is? Looking at the example above, I'd argue that the real Laurence Boccolini has a somewhat rounder/pointier chin, a wider mouth, and possibly different eye wrinkles, although the latter probably depends quite a lot on the facial expression.
- How accurate a representation a photorealistic AI image is is ultimately a matter of editor opinion. Cremastra 🎄 u — c 🎄 21:54, 31 December 2024 (UTC)
- "How well can we determine how accurate a representation it is?" In exactly the same way that we can determine whether a human-crafted image is an accurate representation. How accurate a representation any image is is ultimately a matter of editor opinion. Whether an image is AI or not is irrelevant. I agree the example image above is not sufficiently accurate, but we wouldn't ban photoshopped images because one example was not deemed accurate enough, because we are rational people who understand that one example is not representative of an entire class of images - at least when the subject is something other than AI. Thryduulf (talk) 23:54, 31 December 2024 (UTC)
- I think except in a few exceptional circumstances of actual complex restorations, human photoshopping is not going to change or distort a person's appearance in the same way an AI image would. Modifications done by a person who is paying attention to what they are doing and merely enhancing an image, by a person who is aware, while they are making changes, that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator. Cremastra 🎄 u — c 🎄 00:14, 1 January 2025 (UTC)
- I'm guessing your filter bubble doesn't include Facetune and their notorious Filter (social media)#Beauty filter problems. WhatamIdoing (talk) 02:46, 2 January 2025 (UTC)
- A photo of a person can be connected to a specific time, place, and subject that existed. It can be compared to other images sharing one or more of those properties. A photo that was PhotoShopped is still either a generally faithful reproduction of a scene that existed, or has significant alterations that can still be attributed to a human or at least to a specific algorithm, e.g. filters. The artistic license of a painting can still be attributed to a human and doesn't run much risk of being misidentified as real, unless it's by Chuck Close et al. An AI-generated image cannot be connected to a particular scene that ever existed and cannot be attributable to a human's artistic license (and there is legal precedent that such images are not copyrightable to the prompter specifically because of this). Individual errors in a human-generated artwork are far more predictable, understandable, identifiable, traceable... than those in AI-generated images. We have innate assumptions when we encounter real images or artwork that are just not transferable. These are meaningful differences to the vast majority of people: according to a Getty poll, 87% of respondents want AI-generated art to at least be transparent, and 98% consider authentic images "pivotal in establishing trust". And even if you disagree with all that, can you not see the larger problem of AI images on Misplaced Pages getting propagated into generative AI corpora? JoelleJay (talk) 04:20, 2 January 2025 (UTC)
- I agree that our old assumptions don't hold true. I think the world will need new assumptions. We will probably have those in place in another decade or so.
- I think we're Misplaced Pages:Here to build an encyclopedia, not here to protect AI engines from ingesting AI-generated artwork. Figuring out what they should ingest is their problem, not mine. WhatamIdoing (talk) 07:40, 2 January 2025 (UTC)
- Absolutely no fake/AI images of people, photorealistic or otherwise. How is this even a question? These images are fake. Readers need to be able to trust Misplaced Pages, not navigate around whatever junk someone has created with a prompt and presented as somehow representative. This includes text. :bloodofox: (talk) 22:24, 31 December 2024 (UTC)
- No except for edge cases (mostly, if the image itself is notable enough to go into the article). Gnomingstuff (talk) 22:31, 31 December 2024 (UTC)
- Absolutely not, except for ABOUTSELF. "They're fine if they're accurate enough" is an obscenely naive stance. JoelleJay (talk) 23:06, 31 December 2024 (UTC)
- No with no exceptions. Carrite (talk) 23:54, 31 December 2024 (UTC)
- No. We don't permit falsifications in BLPs. Seraphimblade 00:30, 1 January 2025 (UTC)
- For the requested clarification by Some1, no AI-generated images (except when the image itself is specifically discussed in the article, and even then it should not be the lead image and it should be clearly indicated that the image is AI-generated), no drawings, no nothing of that sort. Actual photographs of the subject, nothing else. Articles are not required to have images at all; no image whatsoever is preferable to something which is not an image of the person. Seraphimblade 05:42, 3 January 2025 (UTC)
- No, but with exceptions. I could imagine a case where a specific AI-generated image has some direct relevance to the notability of the subject of a BLP. In such cases, it should be included, if it could be properly licensed. But I do oppose AI-generated images as portraits of BLP subjects. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)
- Since I was pinged on this point: when I wrote "I do oppose AI-generated images as portraits", I meant exactly that, including all AI-generated images, such as those in a sketchy or artistic style, not just the photorealistic ones. I am not opposed to certain uses of AI-generated images in BLPs when they are not the main portrait of the subject, for instance in diagrams (not depicting the subject) to illustrate some concept pioneered by the subject, or in case someone becomes famous for being the subject of an AI-generated image. —David Eppstein (talk) 05:41, 3 January 2025 (UTC)
- No, and no exceptions or do-overs. Better to have no images (or Stone-Age style cave paintings) than Frankenstein images, no matter how accurate or artistic. Akin to shopped manipulated photographs, they should have no room (or room service) at the WikiInn. Randy Kryn (talk) 01:34, 1 January 2025 (UTC)
- Some "shopped manipulated photographs" are misleading and inaccurate, others are not. We can and do exclude the former from the parts of the encyclopaedia where they don't add value without specific policies and without excluding them where they are relevant (e.g. Photograph manipulation) or excluding those that are not misleading or inaccurate. AI images are no different. Thryduulf (talk) 02:57, 1 January 2025 (UTC)
- Assuming we know. Assuming it's material. The infobox image in – and the only extant photo of – Blind Lemon Jefferson was "photoshopped" by a marketing team, maybe half a century before Adobe Photoshop was created. They wanted to show him wearing a necktie. I don't think that this level of manipulation is actually a problem. WhatamIdoing (talk) 07:44, 2 January 2025 (UTC)
- Yes, so long as it is an accurate representation. Hawkeye7 (discuss) 03:40, 1 January 2025 (UTC)
- No not for BLPs. Traumnovelle (talk) 04:15, 1 January 2025 (UTC)
- No Not at all relevant for pictures of people, as the accuracy is not enough and can misrepresent. Also (and I'm shocked as it seems no one has mentioned this), what about Copyright issues? Who holds the copyright for an AI-generated image? The user who wrote the prompt? The creator(s) of the AI model? The creator(s) of the images in the database that the AI used to create the images? It sounds to me such a clusterfuck of copyright issues that I don't understand how this is even a discussion. --SuperJew (talk) 07:10, 1 January 2025 (UTC)
- Under the US law / copyright office, machine-generated images including those by AI cannot be copyrighted. That also means that AI images aren't treated as derivative works.
What is still under legal concern is whether the use of bodies of copyrighted works without any approval or license from the copyright holders to train AI models is under fair use or not. There are multiple court cases where this is the primary challenge, and none has yet reached a decision. Assuming the courts rule that there was no fair use, that would either require the entity that owns the AI to pay fines and ongoing licensing costs, or delete their trained model to start afresh with free/licensed works, but in either case, that would not impact how we'd use any resulting AI image from a copyright standpoint. — Masem (t) 14:29, 1 January 2025 (UTC)
- No, I'm in agreement with Seraphimblade here. Whether we like it or not, the usage of a portrait on an article implies that it's just that, a portrait. It's incredibly disingenuous to users to represent an AI generated photo as truth. Doawk7 (talk) 09:32, 1 January 2025 (UTC)
- So you just said a portrait can be used because Wikipedia tells you it's a portrait, and thus not a real photo. Can't AI be exactly the same? As long as we tell readers it is an AI representation? Heck, most AI looks closer to the real thing than any portrait. Fyunck(click) (talk) 10:07, 2 January 2025 (UTC)
- To clarify, I didn't mean "portrait" as in "painting," I meant it as "photo of person."
- However, I really want to stick to what you say at the end there:
Heck, most AI looks closer to the real thing than any portrait.
- That's exactly the problem: by looking close to the "real thing" it misleads users into believing a non-existent source of truth.
- Per the wording of the RfC of "depict BLP subjects," I don't think there would be any valid case to utilize AI images. I hold a strong No. Doawk7 (talk) 04:15, 3 January 2025 (UTC)
- No. We should not use AI-generated images for situations like this, they are basically just guesswork by a machine as Quark said and they can misinform readers as to what a person looks like. Plus, there's a big grey area regarding copyright. For an AI generator to know what somebody looks like, it has to have photos of that person in its dataset, so it's very possible that they can be considered derivative works or copyright violations. Using an AI image (derivative work) to get around the fact that we have no free images is just fair use with extra steps. Di (they-them) (talk) 19:33, 1 January 2025 (UTC)
- Maybe There was a prominent BLP image which we displayed on the main page recently. (right) This made me uneasy because it was an artistic impression created from photographs rather than life. And it was "colored digitally". Functionally, this seems to be exactly the same sort of thing as the Laurence Boccolini composite. The issue should not be whether there's a particular technology label involved but whether such creative composites and artists' impressions are acceptable as better than nothing. Andrew🐉(talk) 08:30, 1 January 2025 (UTC)
- Except it is clear to everyone that the illustration to the right is a sketch, a human rendition, while in the photorealistic image above, it is less clear. Cremastra (u — c) 14:18, 1 January 2025 (UTC)
- Except it says right below it "AI-generated image of Laurence Boccolini." How much more clear can it be when it says point-blank "AI-generated image." Fyunck(click) (talk) 10:12, 2 January 2025 (UTC)
- Commons descriptions do not appear on our articles. CMD (talk) 10:28, 2 January 2025 (UTC)
- People taking a quick glance at an infobox image that looks pretty like a photograph are not going to scrutinize commons tagging. Cremastra (u — c) 14:15, 2 January 2025 (UTC)
- Keep in mind that many AIs can produce works that match various styles, not just photographic quality. It is still possible for AI to produce something that looks like a watercolor or sketched drawing. — Masem (t) 14:33, 1 January 2025 (UTC)
- Yes, you're absolutely right. But so far photorealistic images have been the most common to illustrate articles (see Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts for some examples). Cremastra (u — c) 14:37, 1 January 2025 (UTC)
- Then push to ban photorealistic images, rather than pushing for a blanket ban that would also apply to obvious sketches. —David Eppstein (talk) 20:06, 1 January 2025 (UTC)
- Same thing I wrote above, but for "photoshopping" read "drawing": (Bold added for emphasis)
...human drawing is not going to change or distort a person's appearance in the same way an AI image would. Modifications done by a person who is paying attention to what they are doing, by a person who is aware, while they are making the drawing, that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator.
Cremastra (u — c) 20:56, 1 January 2025 (UTC)
- @Cremastra then why are you advocating for a ban on AI images rather than a ban on distorted images? Remember that with careful modifications by someone who is aware of what they are doing, AI images can be made more accurate. Why are you assuming that a human artist is trying to minimise the distortions but someone working with AI is not? Thryduulf (talk) 22:12, 1 January 2025 (UTC)
- I believe that AI-generated images are fundamentally misleading because they are a simulation by a machine rather than a drawing by a human. To quote pythoncoder above:
The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person.
Cremastra (u — c) 00:16, 2 January 2025 (UTC)
- Once again your actual problem is not AI, but with misleading images. Which can be, and are, already a violation of policy. Thryduulf (talk) 01:17, 2 January 2025 (UTC)
- I think all AI-generated images, except simple diagrams as WhatamIdoing points out above, are misleading. So yes, my problem is with misleading images, which includes all photorealistic images generated by AI, which is why I support this proposal for a blanket ban in BLPs and medical articles. Cremastra (u — c) 02:30, 2 January 2025 (UTC)
- To clarify, I'm willing to make an exception in this proposal for very simple geometric diagrams. Cremastra (u — c) 02:38, 2 January 2025 (UTC)
- Despite the fact that not all AI-generated images are misleading, not all misleading images are AI-generated and it is not always possible to tell whether an image is or is not AI-generated? Thryduulf (talk) 02:58, 2 January 2025 (UTC)
- Enforcement is a separate issue. Whether or not all (or the vast majority) of AI images are misleading is the subject of this dispute.
- I'm not going to mistreat the horse further, as we've each made our points and understand where the other stands. Cremastra (u — c) 15:30, 2 January 2025 (UTC)
- Even "simple diagrams" are not clear-cut. The process of AI-generating any image, no matter how simple, is still very complex and can easily follow any number of different paths to meet the prompt constraints. These paths through embedding space are black boxes and the likelihood they converge on the same output is going to vary wildly depending on the degrees of freedom in the prompt, the dimensionality of the embedding space, token corpus size, etc. The only thing the user can really change, other than switching between models, is the prompt, and at some point constructing a prompt that is guaranteed to yield the same result 100% of the time becomes a Borgesian exercise. This is in contrast with non-generative AI diagram-rendering software that follow very fixed, reproducible, known paths. JoelleJay (talk) 04:44, 2 January 2025 (UTC)
- Why does the path matter? If the output is correct it is correct no matter what route was taken to get there. If the output is incorrect it is incorrect no matter what route was taken to get there. If it is unknown or unknowable whether the output is correct or not that is true no matter what route was taken to get there. Thryduulf (talk) 04:48, 2 January 2025 (UTC)
- If I use BioRender or GraphPad to generate a figure, I can be confident that the output does not have errors that would misrepresent the underlying data. I don't have to verify that all 18,000 data points in a scatter plot exist in the correct XYZ positions because I know the method for rendering them is published and empirically validated. Other people can also be certain that the process of getting from my input to the product is accurate and reproducible, and could in theory reconstruct my raw data from it. AI-generated figures have no prescribed method of transforming input beyond what the prompt entails; therefore I additionally have to be confident in how precise my prompt is and confident that the training corpus for this procedure is so accurate that no error-producing paths exist (not to mention absolutely certain that there is no embedded contamination from prior prompts). Other people have all those concerns, and on top of that likely don't have access to the prompt or the raw data to validate the output, nor do they necessarily know how fastidious I am about my generative AI use. At least with a hand-drawn diagram viewers can directly transfer their trust in the author's knowledge and reliability to their presumptions about the diagram's accuracy. JoelleJay (talk) 05:40, 2 January 2025 (UTC)
- If you've got 18,000 data points, we are beyond the realm of "simple geometric diagrams". WhatamIdoing (talk) 07:47, 2 January 2025 (UTC)
- The original "simple geometric diagrams" comment was referring to your 100 dots image. I don't think increasing the dots materially changes the discussion beyond increasing the laboriousness of verifying the accuracy of the image. Photos of Japan (talk) 07:56, 2 January 2025 (UTC)
- Yes, but since "the laboriousness of verifying the accuracy of the image" is exactly what she doesn't want to undertake for 18,000 dots, then I think that's very relevant. WhatamIdoing (talk) 07:58, 2 January 2025 (UTC)
- And where is that cutoff supposed to be? 1000 dots? A single straight line? An atomic diagram? What is "simple" to someone unfamiliar with a topic may be more complex. And I don't want to count 100 dots either! JoelleJay (talk) 17:43, 2 January 2025 (UTC)
- Maybe you don't. But I know for certain that you can count 10 across, 10 down, and multiply those two numbers to get 100. That's what I did when I made the image, after all. WhatamIdoing (talk) 07:44, 3 January 2025 (UTC)
- Comment: when you Google search someone (at least from the Chrome browser), often the link to the Wikipedia article includes a thumbnail of the lead photo as a preview. Even if the photo is labelled as an AI image in the article, people looking at the thumbnail from Google would be misled (if the image is chosen for the preview). Photos of Japan (talk) 09:39, 1 January 2025 (UTC)
- This is why we should not use inaccurate images, regardless of how the image was created. It has absolutely nothing to do with AI. Thryduulf (talk) 11:39, 1 January 2025 (UTC)
- Already opposed a blanket ban: It's unclear to me why we have a separate BLP subsection, as BLPs are already included in the main section above. Anyway, I expressed my views there. MichaelMaggs (talk)
- Some editors might oppose a blanket ban on all AI-generated images while, at the same time, being against using AI-generated images (created by using text prompts/text-to-image models) to depict living people. Some1 (talk) 14:32, 1 January 2025 (UTC)
- No For at least now, let's not let the problems of AI intrude into BLP articles which need to have the highest level of scrutiny to protect the person represented. Other areas on WP may benefit from AI image use, but let's keep it far out of BLP at this point. --Masem (t) 14:35, 1 January 2025 (UTC)
- I am not a fan of “banning” AI images completely… but I agree that BLPs require special handling. I look at AI imagery as being akin to a computer generated painting. In a BLP, we allow paintings of the subject, but we prefer photos over paintings (if available). So… we should prefer photos over AI imagery. That said, AI imagery is getting good enough that it can be mistaken for a photo… so… If an AI generated image is the only option (ie there is no photo available), then the caption should clearly indicate that we are using an AI generated image. And that image should be replaced as soon as possible with an actual photograph. Blueboar (talk) 14:56, 1 January 2025 (UTC)
- The issue with the latter is that Misplaced Pages images get picked up by Google and other search engines, where the caption isn't there anymore to add the context that a photorealistic image was AI-generated. Chaotic Enby (talk · contribs) 15:27, 1 January 2025 (UTC)
- We're here to build an encyclopedia, not to protect commercial search engine companies.
- I think my view aligns with Blueboar's (except that I find no firm preference for photos over classical portrait paintings): We shouldn't have inaccurate AI images of people (living or dead). But the day appears to be coming when AI will generate accurate ones, or at least ones that are close enough to accurate that we can't tell the difference unless the uploader voluntarily discloses that information. Once we can no longer tell the difference, what's the point in banning them? Images need to look like the thing being depicted. When we put a photorealistic image in an article, we could be said to be implicitly claiming that the image looks like whatever's being depicted. We are not necessarily warranting that the image was created through a specific process, but the image really does need to look like the subject. WhatamIdoing (talk) 03:12, 2 January 2025 (UTC)
- You are presuming that sufficient accuracy will prevent us from knowing whether someone is uploading an AI photo, but that is not the case. For instance, if someone uploads large amounts of "photos" of famous people, and can't account for how they got them (e.g. can't give a source where they scraped them from, or dates or any Exif metadata at all for when they were taken), then it will still be obvious that they are likely using AI. Photos of Japan (talk) 17:38, 3 January 2025 (UTC)
- As another editor pointed out in their comment, there's the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet, especially on a site such as Wikipedia and especially on their own biography. WP:BLP says the bios
must be written conservatively and with regard for the subject's privacy.
Some1 (talk) 18:37, 3 January 2025 (UTC)
- Once we can no longer tell the difference, what's the point in banning them?
Sounds like a wolf in sheep's clothing to me. Just because the surface appeal of fake pictures gets better doesn't mean we should let the horse in. Cremastra (u — c) 18:47, 3 January 2025 (UTC)
- If there are no appropriately-licensed images of a person, then by definition any AI-generated image of them will be either a copyright infringement or a complete fantasy. JoelleJay (talk) 04:48, 2 January 2025 (UTC)
- Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant: If an image is a copyvio we can't use it and it is irrelevant why it is a copyvio. If an image is a "complete fantasy" then it is exactly as unusable as a complete fantasy generated by non-AI means, so again AI is irrelevant. I've had to explain this multiple times in this discussion, so read that for more detail and note the lack of refutation. Thryduulf (talk) 04:52, 2 January 2025 (UTC)
- But we can assume good faith that a human isn't blatantly copying something. We can't assume that from an LLM like Stability AI which has been shown to even copy the watermark from Getty's images. Photos of Japan (talk) 05:50, 2 January 2025 (UTC)
- Ooooh, I'm not sure that we can assume that humans aren't blatantly copying something. We can assume that they meant to be helpful, but that's not quite the same thing. WhatamIdoing (talk) 07:48, 2 January 2025 (UTC)
<s>Oppose.</s> Yes. I echo my comments from the other day regarding BLP illustrations:
- lethargilistic (talk) 15:41, 1 January 2025 (UTC)
What this conversation is really circling around is banning entire skillsets from contributing to Wikipedia merely because some of us are afraid of AI images and some others of us want to engineer a convenient, half-baked, policy-level "consensus" to point to when they delete quality images from Wikipedia. Every time someone generates text based on a source, they are doing some acceptable level of interpretation to extract facts or rephrase it around copyright law, and I don't think illustrations should be considered so severely differently as to justify a categorical ban. For instance, the Gisele Pelicot portrait is based on non-free photos of her. Once the illustration exists, it is trivial to compare it to non-free images to determine if it is an appropriate likeness, which it is. That's no different than judging contributed text's compliance with fact and copyright by referring to the source. It shouldn't be treated differently just because most Wikipedians contribute via text.
Additionally, referring to interpretive skillsets that synthesize new information (like, random example, statistical analysis): excluding those from Wikipedia is current practice and not controversial. Meanwhile, I think the ability to create images is more fundamental than that. It's not (inherently) synthesizing new information. A portrait of a person (alongside the other examples in this thread) contains verifiable information. It is current practice to allow them to fill the gaps where non-free photos can't. That should continue. Honestly, it should expand.
- Additionally, in direct response to "these images are fake": All illustrations of a subject could be called "fake" because they are not photographs. (Which can also be faked.) The standard for the inclusion of an illustration on Wikipedia has never been photorealism, medium, or previous publication in a RS. The standard is how adequately it reflects the facts which it claims to depict. If there is a better image that can be imported to Wikipedia via fair use or a license, then an image can be easily replaced. Until such a better image has been sourced, it is absolutely bewildering to me that we would even discuss removing images of people from their articles. What a person looked like is one of the most basic things that people want to know when they look someone up on Wikipedia. Including an image of almost any quality (yes, even a cartoon) is practically by definition an improvement to the article and addressing an important need. We should be encouraging artists to continue filling the gaps that non-free images cannot fill, not creating policies that will inevitably expand into more general prejudices against all new illustrations on Wikipedia. lethargilistic (talk) 15:59, 1 January 2025 (UTC)
- By "Oppose", I'm assuming your answer to the RfC question is "Yes". And this RfC is about using AI-generated images (generated via text prompts, see also: text-to-image model) to depict BLP subjects, not regarding human-created drawings/cartoons/sketches, etc. of BLPs. Some1 (talk) 16:09, 1 January 2025 (UTC)
- I've changed it to "yes" to reflect the reversed question. I think all of this is related because there is no coherent distinguishing point; AI can be used to create images in a variety of styles. These discussions have shown that a policy of banning AI images will be used against non-AI images of all kinds, so I think it's important to say these kinds of things now. lethargilistic (talk) 16:29, 1 January 2025 (UTC)
- Photorealistic images scraped from who knows where from who knows what sources are without question simply fake photographs and also clear WP:OR and outright WP:SYNTH. There's no two ways about it. Articles do not require images: An article with some Frankenstein-ed image scraped from who knows what, where, and when that you "created" from a prompt is not an improvement over having no image at all. If we can't provide a quality image (like something you didn't cook up from a prompt) then people can find quality, non-fake images elsewhere. :bloodofox: (talk) 23:39, 1 January 2025 (UTC)
- I really encourage you to read the discussion I linked before because it is on the WP:NOR talk page. Images like these do not inherently include either OR or SYNTH, and the arguments that they do cannot be distinguished from any other user-generated image content. But, briefly, I never said articles required images, and this is not about what articles require. It is about improvements to the articles. Including a relevant picture where none exists is almost always an improvement, especially for subjects like people. Your disdain for the method the person used to make an image is irrelevant to whether the content of the image is actually verifiable, and the only thing we ought to care about is the content. lethargilistic (talk) 03:21, 2 January 2025 (UTC)
- Images like these are absolutely nothing more than synthesis in the purest sense of the word and are clearly a violation of WP:SYNTH: Again, you have no idea what data was used to generate these images and you're going to have a very hard time convincing anyone to describe them as anything other than outright fakes.
- A reminder that WP:SYNTH shuts down attempts at manipulation of images ("It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Wikipedia:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.") and generating a photorealistic image (from who knows what!) is far beyond that.
- Fake images of people do not improve our articles in any way and only erode reader trust. What's next, an argument for the fake sources LLMs also love to "hallucinate"? :bloodofox: (talk) 03:37, 2 January 2025 (UTC)
- So, if you review the first sentence of SYNTH, you'll see it has no special relevance to this discussion:
Do not combine material from multiple sources to state or imply a conclusion not explicitly stated by any of the sources.
My primary example has been a picture of a person; what a person looks like is verifiable by comparing the image to non-free images that cannot be used on Wikipedia. If the image resembles the person, it is not SYNTH. An illustration of a person created and intended to look like that person is not a manipulation. The training data used to make the AI is irrelevant to whether the image in fact resembles the person. You should also review WP:NOTSYNTH because SYNTH is not a policy; NOR is the policy: If a putative SYNTH doesn't constitute original research, then it doesn't constitute SYNTH.
Additionally, not all synthesis is even SYNTH. A categorical rule against AI cannot be justified by SYNTH because it does not categorically apply to all use cases of AI. To do so would be illogical on top of ill-advised. lethargilistic (talk) 08:08, 2 January 2025 (UTC)- "training data used to make the AI is irrelevant" — spoken like a true AI evangelist! Sorry, 'good enough' photorealism is still just synthetic slop, a fake image presented as real of a human being. A fake image of someone generated from who-knows-what that 'resembles' an article's subject is about as WP:SYNTH as it gets. Yikes. As for the attempts to pass of prompt-generated photorealistic fakes of people as somehow the same as someone's illustration, you're completely wasting your time. :bloodofox: (talk) 09:44, 2 January 2025 (UTC)
- NOR is a content policy and SYNTH is content guidance within NOR. Because you have admitted that this is not about the content for you, NOR and SYNTH are irrelevant to your argument, which boils down to WP:IDONTLIKEIT and, now, inaccurate personal attacks. Continuing this discussion between us would be pointless. lethargilistic (talk) 09:52, 2 January 2025 (UTC)
- This is in fact entirely about content (why the hell else would I bother?) but it is true that I also dismissed your pro-AI 'it's just like a human drawing a picture!' as outright nonsense a while back. Good luck convincing anyone else with that line - it didn't work here. :bloodofox: (talk) 09:59, 2 January 2025 (UTC)
- I really encourage you to read the discussion I linked before because it is on the WP:NOR talk page. Images like these do not inherently include either OR or SYNTH, and the arguments that they do cannot be distinguished from any other user-generated image content. But, briefly, I never said articles required images, and this is not about what articles require. It is about improvements to the articles. Including a relevant picture where none exists is almost always an improvement, especially for subjects like people. Your disdain for the method the person used to make an image is irrelevant to whether the content of the image is actually verifiable, and the only thing we ought to care about is the content. lethargilistic (talk) 03:21, 2 January 2025 (UTC)
- By "Oppose", I'm assuming your answer to the RfC question is "Yes". And this RfC is about using AI-generated images (generated via text prompts, see also: text-to-image model) to depict BLP subjects, not regarding human-created drawings/cartoons/sketches, etc. of BLPs. Some1 (talk) 16:09, 1 January 2025 (UTC)
- Additionally, in direct response to "these images are fake": All illustrations of a subject could be called "fake" because they are not photographs. (Which can also be faked.) The standard for the inclusion of an illustration on Misplaced Pages has never been photorealism, medium, or previous publication in a RS. The standard is how adequately it reflects the facts which it claims to depict. If there is a better image that can be imported to Misplaced Pages via fair use or a license, then an image can be easily replaced. Until such a better image has been sourced, it is absolutely bewildering to me that we would even discuss removing images of people from their articles. What a person looked like is one of the most basic things that people want to know when they look someone up on Misplaced Pages. Including an image of almost any quality (yes, even a cartoon) is practically by definition an improvement to the article and addressing an important need. We should be encouraging artists to continue filling the gaps that non-free images cannot fill, not creating policies that will inevitably expand into more general prejudices against all new illustrations on Misplaced Pages. lethargilistic (talk) 15:59, 1 January 2025 (UTC)
- Maybe: there is an implicit assumption with this RFC that an AI generated image would be photorealistic. There hasn't been any discussion of an AI generated sketch. If you asked an AI to generate a sketch (that clearly looked like a sketch, similar to the Gisèle Pelicot example) then I would potentially be ok with it. Photos of Japan (talk) 18:14, 1 January 2025 (UTC)
- That's an interesting thought to consider. At the same time, I worry about (well-intentioned) editors inundating image-less BLP articles with AI-generated images in the style of cartoons/sketches (if only photorealistic ones are prohibited) etc. At least requiring a human to draw/paint/whatever creates a barrier to entry; these AI-generated images can be created in under a minute using these text-to-image models. Editors are already wary about human-created cartoon portraits (see the NORN discussion), now they'll be tasked with dealing with AI-generated ones in BLP articles. Some1 (talk) 20:28, 1 January 2025 (UTC)
- It sounds like your problem is not with AI but with cartoon/sketch images in BLP articles, so AI is once again completely irrelevant. Thryduulf (talk) 22:14, 1 January 2025 (UTC)
- That is a good concern you brought up. There is a possibility of the spamming of low-quality AI-generated images, which would be laborious to discuss on a case-by-case basis but easy to generate. At the same time, though, that is a possibility but not yet an actuality, and WP:CREEP states that new policies should address current problems rather than hypothetical concerns. Photos of Japan (talk) 22:16, 1 January 2025 (UTC)
- Easy no for me. I am not against the use of AI images wholesale, but I do think that using AI to represent an existent thing such as a person or a place is too far. Even a tag wouldn't be enough for me. Cessaune 19:05, 1 January 2025 (UTC)
- No obviously, per previous discussions about cartoonish drawn images in BLPs. Same issue here as there, it is essentially original research and misrepresentation of a living person's likeness. Zaathras (talk) 22:19, 1 January 2025 (UTC)
- No to photorealistic, no to cartoonish... this is not a hard choice. The idea that "this has nothing to do with AI" when "AI" magnifies the problem to stupendous proportions is just not tenable. XOR'easter (talk) 23:36, 1 January 2025 (UTC)
- While AI might "amplify" the thing you dislike, that does not make AI the problem. The problem is whatever underlying thing is being amplified. Thryduulf (talk) 01:16, 2 January 2025 (UTC)
- The thing that amplifies the problem is necessarily a problem. XOR'easter (talk) 02:57, 2 January 2025 (UTC)
- That is arguable, but banning the amplifier does not do anything to solve the problem. In this case, banning the amplifier would cause multiple other problems that nobody supporting this proposal has even attempted to address, let alone mitigate. Thryduulf (talk) 03:04, 2 January 2025 (UTC)
- No for all people, per Chaotic Enby. Nikkimaria (talk) 03:23, 2 January 2025 (UTC) Add: no to any AI-generated images, whether photorealistic or not. Nikkimaria (talk) 04:00, 3 January 2025 (UTC)
- No - We should not be hosting faked images (except as notable fakes). We should also not be hosting copyvios ("Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant" is just totally wrong - we should be steering clear of copyvios, and if the issue is unsettled then we shouldn't use them until it is).
- If people upload faked images to WP or Commons the response should be as it is now. The fact that fakes are becoming harder to detect simply from looking at them hardly affects this - we simply confirm when the picture was supposed to have been taken and examine the plausibility of it from there. FOARP (talk) 14:39, 2 January 2025 (UTC)
- "we should be steering clear of copyvio" - we do - if an image is a copyright violation it gets deleted, regardless of why it is a copyright violation. What we do not do is ban using images that are not copyright violations because they are copyright violations. Currently the WMF lawyers and all the people on Commons who know more about copyright than I do say that at least some AI images are legally acceptable for us to host and use. If you want to argue that, then go ahead, but it is not relevant to this discussion. "if people upload faked images the response should be as it is now" - in other words you are saying that the problem is faked images not AI, and that current policies are entirely adequate to deal with the problem of faked images. So we don't need any specific rules for AI images - especially given that not all AI images are fakes. Thryduulf (talk) 15:14, 2 January 2025 (UTC)
- The idea that "current policies are entirely adequate" is like saying that a lab shouldn't have specific rules about wearing eye protection when it already has a poster hanging on the wall that says "don't hurt yourself". XOR'easter (talk) 18:36, 2 January 2025 (UTC)
- I rely on one of those rotating shaft warnings up in my workshop at home. I figure if that doesn't keep me safe, nothing will. ScottishFinnishRadish (talk) 18:41, 2 January 2025 (UTC)
- "in other words you are saying that the problem is faked images not AI" - AI generated images *are* fakes. This is merely confirming that for the avoidance of doubt.
- "at least some AI images are legally acceptable for us" - Until they decide which ones that isn't much help. FOARP (talk) 19:05, 2 January 2025 (UTC)
- Yes – what FOARP said. AI-generated images are fakes and are misleading. Cremastra (u — c) 19:15, 2 January 2025 (UTC)
- Those specific rules exist because generic warnings have proven not to be sufficient. Nobody has presented any evidence that the current policies are not sufficient, indeed quite the contrary. Thryduulf (talk) 19:05, 2 January 2025 (UTC)
- No! This would be a massive can of worms; perhaps, however, we wish to cause problems in the new year. JuxtaposedJacob (talk) | :) | he/him | 15:00, 2 January 2025 (UTC)
- Noting that I think that no AI-generated images are acceptable in BLP articles, regardless of whether they are photorealistic or not. JuxtaposedJacob (talk) | :) | he/him | 15:40, 3 January 2025 (UTC)
- No, unless the AI image has encyclopedic significance beyond "depicts a notable person". AI images, if created by editors for the purpose of inclusion in Misplaced Pages, convey little reliable information about the person they depict, and the ways in which the model works are opaque enough to most people as to raise verifiability concerns. ModernDayTrilobite (talk • contribs) 15:25, 2 January 2025 (UTC)
- To clarify, do you object to uses of an AI image in a BLP when the subject uses that image for self-identification? I presume that AI images that have been the subject of notable discussion are an example of "significance beyond depict a notable person"? Thryduulf (talk) 15:54, 2 January 2025 (UTC)
- If the subject uses the image for self-identification, I'd be fine with it - I think that'd be analogous to situations such as "cartoonist represented by a stylized self-portrait", which definitely has some precedent in articles like Al Capp. I agree with your second sentence as well; if there's notable discussion around a particular AI image, I think it would be reasonable to include that image on Misplaced Pages. ModernDayTrilobite (talk • contribs) 19:13, 2 January 2025 (UTC)
- No, with obvious exceptions, including if the subject themself uses the image as their representation, or if the image is notable itself. Not including the lack of a free alternative: if there is no free alternative... where did the AI find data to build an image... non-free too. Not including images generated by WP editors (that's kind of original research...) - Nabla (talk) 18:02, 2 January 2025 (UTC)
- Maybe. I think the question is unfair as it is illustrated with what appears to be a photo of the subject but isn't. People are then getting upset that they've been misled. As others note, there are copyright concerns with AI reproducing copyrighted works that in turn make an image that is potentially legally unusable. But that is more a matter for Commons than for Misplaced Pages. As many have noted, a sketch or painting never claims to be an accurate depiction of a person, and I don't care if that sketch or painting was done by hand or an AI prompt. I strongly ask Some1 to abort the RFC. You've asked people to give a yes/no vote to what is a more complex issue. A further problem with the example used is the unfortunate prejudice on Misplaced Pages against user-generated content. While the text-generated AI of today is crude and random, there will come a point where many professionally published photos illustrating subjects, including people, are AI generated. Even today, your smartphone can create a group shot where everyone is smiling and looking at the camera. It was "trained" on the 50 images it quickly took and responded to the built-in "text prompt" of "create a montage of these photos such that everyone is smiling and looking at the camera". This vote is a knee-jerk reaction to content that is best addressed by some other measure (such as that it is a misleading image). And a good example of asking people to vote way too early, when the issues haven't been thought out -- Colin° 18:17, 2 January 2025 (UTC)
- No This would very likely set a dangerous precedent. The only exception I think should be if the image itself is notable. If we move forward with AI images, especially for BLPs, it would only open up a whole slew of regulations and RfCs to keep them in check. Better no image than some digital multiverse version of someone that is "basically" them but not really. Not to mention the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet. Tepkunset (talk) 18:31, 2 January 2025 (UTC)
- No. LLMs don't generate answers, they generate things that look like answers, but aren't; a lot of the time, that's good enough, but sometimes it very much isn't. It's the same issue for text-to-image models: they don't generate photos of people, they generate things that look like photos. Using them on BLPs is unacceptable. DS (talk) 19:30, 2 January 2025 (UTC)
- No. I would be pissed if the top picture of me on Google was AI-generated. I just don't think it's moral for living people. The exceptions given above by others are okay, such as if the subject uses the picture themselves or if the picture is notable (with context given). win8x (talk) 19:56, 2 January 2025 (UTC)
- No. Uploading alone, although mostly a Commons issue, would already be a problem to me and may have personality rights issues. Illustrating an article with a fake photo (or drawing) of a living person, even if it is labeled as such, would not be acceptable. For example, it could end up being shown by search engines or when hovering over a Misplaced Pages link, without the disclaimer. ~ ToBeFree (talk) 23:54, 2 January 2025 (UTC)
- I was going to say no... but we allow paintings as portraits in BLPs. What's so different between an AI generated image, and a painting? Arguments above say the depiction may not be accurate, but the same is true of some paintings, right? (and conversely, not true of other paintings) ProcrastinatingReader (talk) 00:48, 3 January 2025 (UTC)
- A painting is clearly a painting; as such, the viewer knows that it is not an accurate representation of a particular reality. An AI-generated image made to look exactly like a photo looks like a photo but is not.
- DS (talk) 02:44, 3 January 2025 (UTC)
- Not all paintings are clearly paintings. Not all AI-generated images are made to look like photographs. Not all AI-generated images made to look like photos do actually look like photos. This proposal makes no distinction. Thryduulf (talk) 02:55, 3 January 2025 (UTC)
- Not to mention, hyper-realism is a style an artist may use in virtually any medium. Colored pencils can be used to make extremely realistic portraits. If Misplaced Pages would accept an analog substitute like a painting, there's no reason Misplaced Pages shouldn't accept an equivalent painting made with digital tools, and there's no reason Misplaced Pages shouldn't accept an equivalent painting made with AI. That is, one where any obvious defects have been edited out and what remains is a straightforward picture of the subject. lethargilistic (talk) 03:45, 3 January 2025 (UTC)
- For the record (and for any media watching), while I personally find it fascinating that a few editors here are spending a substantial amount of time (in the face of an overwhelming 'absolutely not' consensus no less) attempting to convince others that computer-generated (that is, faked) photos of human article subjects are somehow a good thing, I also find it interesting that these editors seem to express absolutely no concern for the intensely negative reaction they're already seeing from their fellow editors and seem totally unconcerned about the inevitable trust drop we'd experience from Misplaced Pages readers when they would encounter fake photos on our BLP articles especially. :bloodofox: (talk) 03:54, 3 January 2025 (UTC)
- Misplaced Pages's reputation would not be affected positively or negatively by expanding the current-albeit-sparse use of illustrations to depict subjects that do not have available pictures. In all my writing about this over the last few days, you are the only one who has said anything negative about me as a person or, really, my arguments themselves. As loath as I am to cite it, WP:AGF means assuming that people you disagree with are not trying to hurt Misplaced Pages. Thryduulf, I, and others have explained in detail why we think our ultimate ideas are explicit benefits to Misplaced Pages and why our opposition to these immediate proposals comes from a desire to prevent harm to Misplaced Pages. I suggest taking a break to reflect on that, matey. lethargilistic (talk) 04:09, 3 January 2025 (UTC)
- Look, I don't know if you've been living under a rock or what for the past few years but the reality is that people hate AI images and dumping a ton of AI/fake images on Misplaced Pages, a place people go for real information and often trust, inevitably leads to a huge trust issue, something Misplaced Pages is increasingly suffering from already. This is especially a problem when they're intended to represent living people (!). I'll leave it to you to dig up the bazillion controversies that have arisen and continue to arise since companies worldwide have discovered that they can now replace human artists with 'AI art' produced by "prompt engineers" but you can't possibly expect us to ignore that reality when discussing these matters. :bloodofox: (talk) 04:55, 3 January 2025 (UTC)
- Those trust issues are born from the publication of hallucinated information. I have only said that it should be OK to use an image on Misplaced Pages when it contains only verifiable information, which is the same standard we apply to text. That standard is and ought to be applied independently of the way the initial version of an image was created. lethargilistic (talk) 06:10, 3 January 2025 (UTC)
- To my eye, the distinction between AI images and paintings here is less a question of medium and more of verifiability: the paintings we use (or at least the ones I can remember) are significant paintings that have been acknowledged in sources as being reasonable representations of a given person. By contrast, a purpose-generated AI image would be more akin to me painting a portrait of somebody here and now and trying to stick that on their article. The image could be a faithful representation (unlikely, given my lack of painting skills, but let's not get lost in the metaphor), but if my painting hasn't been discussed anywhere besides Misplaced Pages, then it's potentially OR or UNDUE to enshrine it in mainspace as an encyclopedic image. ModernDayTrilobite (talk • contribs) 05:57, 3 January 2025 (UTC)
- An image contains a collection of facts, and those facts need to be verifiable just like any other information posted on Misplaced Pages. An image that verifiably resembles a subject as it is depicted in reliable sources is categorically not OR. Discussion in other sources is not universally relevant; we don't restrict ourselves to only previously-published images. If we did that, Misplaced Pages would have very few images. lethargilistic (talk) 06:18, 3 January 2025 (UTC)
- Verifiable how? Only by the editor themselves comparing to a real photo (which was probably used by the LLM to create the image…).
- These things are fakes. The analysis stops there. FOARP (talk) 10:48, 4 January 2025 (UTC)
- Verifiable by comparing them to a reliable source. Exactly the same as what we do with text. There is no coherent reason to treat user-generated images differently than user-generated text, and the universalist tenor of this discussion has damaging implications for all user-generated images regardless of whether they were created with AI. Honestly, I rarely make arguments like this one, but I think it could show some intuition from another perspective: Imagine it's 2002 and Misplaced Pages is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures. The text editors say the artists cannot contribute ANYTHING to Misplaced Pages because their images that have not been previously published are not verifiable. That is a double standard that privileges the contributions of text editors simply because most users are text editors and they are used to verifying text; that is not a principled reason to treat text and images differently. Moreover, that is simply not what happened—the opposite happened, and images are treated as verifiable based on their contents just like text because that's a common-sense reading of the rule. It would have been madness if images had been treated differently. And yet that is essentially the fundamentalist position of people who are extending their opposition to AI with arguments that apply to all images. If they are arguing verifiability seriously at all, they are pretending that the sort of degenerate situation I just described already exists when the opposite consensus has been reached consistently for years. In the related NOR thread, they even tried to say Wikipedians had "turned a blind eye" to these image issues as if negatively characterizing those decisions would invalidate the fact that those decisions were consensus. The motivated reasoning of these discussions has been as blatant as that.
At the bottom of this dispute, I take issue with trying to alter the rules in a way that creates a new double standard within verifiability that applies to all images but not text. That's especially upsetting when (despite my and others' best efforts) so many are still focusing SOLELY on their hatred for AI rather than considering the obvious second-order consequences for user-generated images as a whole.
Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake." The issue has always been verifiability, not provenance or falsity. Sometimes, IMO, that has led to disaster and Misplaced Pages saying things I know to be factually untrue despite the contents of reliable sources. But that is the policy. We compare the contents of Misplaced Pages to reliable sources, and the contents of Misplaced Pages are considered verifiable if they cohere.
- I ask again: If Misplaced Pages's response to the creation of AI imaging tools is to crack down on all artistic contributions to Misplaced Pages (which seems to be the inevitable direction of these discussions), what does that say? If our negative response to AI tools is to limit what humans can do on Misplaced Pages, what does that say? Are we taking a stand for human achievements, or is this a very heated discussion of cutting off our nose to save our face? lethargilistic (talk) 23:31, 4 January 2025 (UTC)
- "Verifiable by comparing them to a reliable source" - comparing two images and saying that one looks like the other is not "verifying" anything. The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing. "Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake."" - Try presenting a paraphrasing as a quotation and see what happens. "Imagine it's 2002 and Misplaced Pages is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures..." - This basically happened, and is the origin of WP:NOTGALLERY. Misplaced Pages is not a host for original works. FOARP (talk) 22:01, 6 January 2025 (UTC)
- "Comparing two images and saying that one looks like the other is not "verifying" anything." Comparing text to text in a reliable source is literally the same thing. "The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing." No it isn't. The text equivalent is writing a sentence in an article and putting a ref tag on it. Perhaps there is room for improving the referencing of images in the sense that they should offer example comparisons to make. But an image created by a person is not unverifiable simply because it is user-generated. It is not somehow more unverifiable simply because it is created in a lifelike style. "Try presenting a paraphrasing as a quotation and see what happens." Besides what I just said, nobody is even presenting these images as equatable to quotations. People in this thread have simply been calling them "fake" of their own initiative; the uploaders have not asserted that these are literal photographs to my knowledge. The uploaders of illustrations obviously did not make that claim either. (And, if the contents of the image is a copyvio, that is a separate issue entirely.) "This basically happened, and is the origin of WP:NOTGALLERY." That is not the same thing. User-generated images that illustrate the subject are not prohibited by WP:NOTGALLERY. Misplaced Pages is a host of encyclopedic content, and user-generated images can have encyclopedic content. lethargilistic (talk) 02:41, 7 January 2025 (UTC)
- Images are way more complex than text. Trying to compare them in the same way is a very dangerous simplification. Cremastra (u — c) 02:44, 7 January 2025 (UTC)
- Assume only non-free images exist of a person. An illustrator refers to those non-free images and produces a painting. From that painting, you see a person who looks like the person in the non-free photographs. The image is verified as resembling the person. That is a simplification, but to call it "dangerous" is disingenuous at best. The process for challenging the image is clear. Someone who wants to challenge the veracity of the image would just need to point to details that do not align. For instance, "he does not typically have blue hair" or "he does not have a scar." That is what we already do, and it does not come up much because it would be weird to deliberately draw an image that looks nothing like the person. Additionally, someone who does not like the image for aesthetic reasons rather than encyclopedic ones always has the option of sourcing a photograph some other way like permission, fair use, or taking a new one themself. This is not an intractable problem. lethargilistic (talk) 02:57, 7 January 2025 (UTC)
- So a photorealistic AI-generated image would be considered acceptable until someone identifies a "big enough" difference? How is that anything close to ethical? A portrait that's got an extra mole or slightly wider nose bridge or lacks a scar is still not an image of the person regardless of whether random Misplaced Pages editors notice. And while I don't think user-generated non-photorealistic images should ever be used on biographies either, at least those can be traced back to a human who is ultimately responsible for the depiction, who can point to the particular non-free images they used as references, and isn't liable to average out details across all time periods of the subject. And that's not even taking into account the copyright issues. JoelleJay (talk) 22:52, 7 January 2025 (UTC)
- +1 to what JoelleJay said. The problem is that AI-generated images are simulations trying to match existing images, sometimes, yes, with an impressive degree of accuracy. But they will always be inferior to a human-drawn painting that's trying to depict the person. We're a human encyclopedia, and we're built by humans doing human things and sometimes with human errors. Cremastra (u — c) 23:18, 7 January 2025 (UTC)
- You can't just raise this to an "ethical" issue by saying the word "ethical." You also can't just invoke copyright without articulating an actual copyright issue; we are not discussing copyvio. Everyone agrees that a photo with an actual copyvio in it is subject to that policy.
- But to address your actual point: Any image—any photo—beneath the resolution necessary to depict the mole would be missing the mole. Even with photography, we are never talking about science-fiction images that perfectly depict every facet of a person in an objective sense. We are talking about equipment that creates an approximation of reality. The same is true of illustrations and AI imagery.
- Finally, a human being is responsible for the contents of the image because a human is selecting it and is responsible for correcting any errors. The result is an image that someone is choosing to use because they believe it is an appropriate likeness. We should acknowledge that human decision and evaluate it naturally—Is it an appropriate likeness? lethargilistic (talk) 10:20, 8 January 2025 (UTC)
- (Second comment because I'm on my phone.) I realize I should also respond to this in terms of additive information. What people look like is not static in the way your comment implies. Is it inappropriate to use a photo because they had a zit on the day it was taken? Not necessarily. Is an image inappropriate because it is taken at a bad angle that makes them look fat? Judging by the prolific ComicCon photographs (where people seem to make a game of choosing the worst-looking options; seriously, it's really bad), not necessarily. Scars and bruises exist and then often heal over time. The standard for whether an image with "extra" details is acceptable would still be based on whether it comports acceptably with other images; we literally do what you have capriciously described as "unethical" and supplement it with our compassionate desire to not deliberately embarrass BLPs. (The ComicCon images aside, I guess.) So, no, I would not be a fan of using images that add prominent scars where the subject is not generally known to have one, but that is just an unverifiable fact that does not belong in a Misplaced Pages image. Simple as. lethargilistic (talk) 10:32, 8 January 2025 (UTC)
- We don't evaluate the reliability of a source solely by comparing it to other sources. For example, there is an ongoing discussion at the baseball WikiProject talk page about the reliability of a certain web site. It lists no authors nor any information on its editorial control policy, so we're not able to evaluate its reliability. The reliability of all content being used as a source, including images, needs to be considered in terms of its provenance. isaacl (talk) 23:11, 7 January 2025 (UTC)
- Verifiable by comparing them to a reliable source. Exactly the same as what we do with text. There is no coherent reason to treat user-generated images differently than user-generated text, and the universalist tenor of this discussion has damaging implications for all user-generated images regardless of whether they were created with AI. Honestly, I rarely make arguments like this one, but I think it could show some intuition from another perspective: Imagine it's 2002 and Misplaced Pages is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures. The text editors say the artists cannot contribute ANYTHING to Misplaced Pages because their images that have not been previously published are not verifiable. That is a double-standard that privileges the contributions of text-editors simply because most users are text-editors and they are used to verifying text; that is not a principled reason to treat text and images differently. Moreover, that is simply not what happened—The opposite happened, and images are treated as verifiable based on their contents just like text because that's a common sense reading of the rule. It would have been madness if images had been treated differently. And yet that is essentially the fundamentalist position of people who are extending their opposition to AI with arguments that apply to all images. If they are arguing verifiability seriously at all, they are pretending that the sort of degenerate situation I just described already exists when the opposite consensus has been reached consistently for years. In the related NOR thread, they even tried to say Wikipedians had "turned a blind eye" to these image issues as if negatively characterizing those decisions would invalidate the fact that those decisions were consensus. The motivated reasoning of these discussions has been as blatant as that.
- An image contains a collection of facts, and those facts need to be verifiable just like any other information posted on Misplaced Pages. An image that verifiably resembles a subject as it is depicted in reliable sources is categorically not OR. Discussion in other sources is not universally relevant; we don't restrict ourselves to only previously-published images. If we did that, Misplaced Pages would have very few images. lethargilistic (talk) 06:18, 3 January 2025 (UTC)
- Can you note in your !vote whether AI-generated images (generated via text prompts/text-to-image models) that are not photo-realistic / hyper-realistic in style are okay to use to depict BLP subjects? For example, see the image to the right, which was added then removed from his article.
Pinging people who !voted No above: User:Chaotic Enby, User:Cremastra, User:Horse Eye's Back, User:Pythoncoder, User:Kj cheetham, User:Bloodofox, User:Gnomingstuff, User:JoelleJay, User:Carrite, User:Seraphimblade, User:David Eppstein, User:Randy Kryn, User:Traumnovelle, User:SuperJew, User:Doawk7, User:Di (they-them), User:Masem, User:Cessaune, User:Zaathras, User:XOR'easter, User:Nikkimaria, User:FOARP, User:JuxtaposedJacob, User:ModernDayTrilobite, User:Nabla, User:Tepkunset, User:DragonflySixtyseven, User:Win8x, User:ToBeFree --- Some1 (talk) 03:55, 3 January 2025 (UTC)
- Still no, I thought I was clear on that but we should not be using AI-generated images in articles for anything besides representing the concept of AI-generated images, or if an AI-generated image is notable or irreplaceable in its own right -- e.g, a musician uses AI to make an album cover.
- (this isn't even a good example, it looks more like Steve Bannon)
- Gnomingstuff (talk) 04:07, 3 January 2025 (UTC)
- Was I unclear? No to all of them. XOR'easter (talk) 04:13, 3 January 2025 (UTC)
- Still no, because carving out that type of exception will just lead to arguments down the line about whether a given image is too realistic. —pythoncoder (talk | contribs) 04:24, 3 January 2025 (UTC)
- I still think no. My opposition isn't just to the fact that AI images are misinformation, but also that they essentially serve as a loophole for getting around Enwiki's image use policy. To know what somebody looks like, an AI generator needs to have images of that person in its dataset, and it draws on those images to generate a derivative work. If we have no free images of somebody and we use AI to make one, that's just using a fair use copyrighted image but removed by one step. The image use policy prohibits us from using fair use images for BLPs so I don't think we should entertain this loophole. If we do end up allowing AI images in BLPs, that just disqualifies the rationale of not allowing fair use in the first place. Di (they-them) (talk) 04:40, 3 January 2025 (UTC)
- No those are not okay, as this will just cause arguments from people saying a picture is obviously AI-generated, and that it is therefore appropriate. As I mentioned above, there are some exceptions to this, which Gnomingstuff perfectly describes. Fake sketches/cartoons are not appropriate and provide little encyclopedic value. win8x (talk) 05:27, 3 January 2025 (UTC)
- No to this as well, with the same carveout for individual images that have received notable discussion. Non-photorealistic AI images are going to be no more verifiable than photorealistic ones, and on top of that will often be lower-quality as images. ModernDayTrilobite (talk • contribs) 05:44, 3 January 2025 (UTC)
- Thanks for the ping, yes I can, the answer is no. ~ ToBeFree (talk) 07:31, 3 January 2025 (UTC)
- No, and that image should be deleted before anyone places it into a mainspace article. Changing the RfC intro long after its inception seems a second bite at an apple that's not aged well. Randy Kryn (talk) 09:28, 3 January 2025 (UTC)
- The RfC question has not been changed; another editor was complaining that the RfC question did not make a distinction between photorealistic/non-photorealistic AI-generated images, so I had to add a note to the intro and ping the editors who'd voted !No to clarify things. It has only been 3 days; there's still 27 more days to go. Some1 (talk) 11:18, 3 January 2025 (UTC)
- Also answering No to this one per all the arguments above. "It has only been 3 days" is not a good reason to change the RfC question, especially since many people have already !voted and the "30 days" is mostly indicative rather than an actual deadline for a RfC. Chaotic Enby (talk · contribs) 14:52, 3 January 2025 (UTC)
- The RfC question hasn't been changed; see my response to Zaathras below. Some1 (talk) 15:42, 3 January 2025 (UTC)
- No, that's even a worse possible approach. — Masem (t) 13:24, 3 January 2025 (UTC)
- No. We're the human encyclopedia. We should have images drawn or taken by real humans who are trying to depict the subject, not by machines trying to simulate an image. Besides, the given example is horribly drawn. Cremastra (u — c) 15:03, 3 January 2025 (UTC)
- I like these even less than the photorealistic ones... This falls into the same basket for me: if we wouldn't let a random editor who drew this at home using conventional tools add it to the article, why would we let a random editor who drew this at home using AI tools add it to the article? (and just to be clear the AI generated image of Germán Larrea Mota-Velasco is not recognizable as such) Horse Eye's Back (talk) 16:06, 3 January 2025 (UTC)
- I said *NO*. FOARP (talk) 10:37, 4 January 2025 (UTC)
- No Having such images, as said above, means the AI had to use copyrighted pictures to create them, and we shouldn't use them. --SuperJew (talk) 01:12, 5 January 2025 (UTC)
- Still no. If for no other reason than that it's a bad precedent. As others have said, if we make one exception, it will just lead to arguments in the future about whether something is "realistic" or not. I also don't see why we would need cartoon/illustrated-looking AI pictures of people in BLPs. Tepkunset (talk) 20:43, 6 January 2025 (UTC)
- Absolutely not. These images are based on whatever the AI could find on the internet, with little to no regard for copyright. Misplaced Pages is better than this. Retswerb (talk) 10:16, 3 January 2025 (UTC)
- Comment The RfC question should not have been fiddled with, esp. for such a minor argument that the complainant could have simply included in their own vote. I have no need to re-confirm my own entry. Zaathras (talk) 14:33, 3 January 2025 (UTC)
- The RfC question hasn't been modified; I've only added a 03:58, January 3, 2025: Note clarifying that these images can either be photorealistic in style or non-photorealistic in style. I pinged all the !No voters to make them aware. I could remove the Note if people prefer that I do (but the original RfC question is the exact same as it is now, so I don't think the addition of the Note makes a whole ton of difference). Some1 (talk) 15:29, 3 January 2025 (UTC)
- No At this point it feels redundant, but I'll just add to the horde of responses in the negative. I don't think we can fully appreciate the issues that this would cause. The potential problems and headaches far outweigh whatever little benefit might come from AI images for BLPs. pillowcrow 21:34, 3 January 2025 (UTC)
- Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)
- No. Misplaced Pages is made by and for humans. I don't want it to become Google. Adding an AI-generated image to a page whose topic isn't about generative AI makes me feel insulted. SWinxy (talk) 00:03, 4 January 2025 (UTC)
- No. Generative AI may have its place, and it may even have a place on Misplaced Pages in some form, but that place isn't in BLPs. There's no reason to use images of someone that do not exist over a real picture, or even something like a sketch, drawing, or painting. Even in the absence of pictures or human-drawn/painted images, I don't support using AI-generated images; they're not really pictures of the person, after all, so I can't support using them on articles of people. Using nothing would genuinely be a better choice than generated images. SmittenGalaxy | talk! 01:07, 4 January 2025 (UTC)
- No due to reasons of copyright (AI harvests copyrighted material) and verifiability. Gamaliel (talk) 18:12, 4 January 2025 (UTC)
- No. Even if you are willing to ignore the inherently fraught nature of using AI-generated anything in relation to BLP subjects, there is simply little to no benefit that could possibly come from trying something like this. There's no guarantee the images will actually look like the person in question, and therefore there's no actual context or information that the image is providing the reader. What a baffling proposal. Ithinkiplaygames (talk) 19:53, 4 January 2025 (UTC)
There's no guarantee the images will actually look like the person in question
there is no guarantee any image will look like the person in question. When an image is not a good likeness, regardless of why, we don't use it. When an image is a good likeness we consider using it. Whether an image is AI-generated or not is completely independent of whether it is a good likeness. There are also reasons other than identification why images are used on BLP articles. Thryduulf (talk) 20:39, 4 January 2025 (UTC)
- Foreseeably there may come a time when people's official portraits are AI-enhanced. That time might not be very far in the future. Do we want an exception for official portraits?—S Marshall T/C 01:17, 5 January 2025 (UTC)
- This subsection is about purely AI-generated works, not about AI-enhanced ones. Chaotic Enby (talk · contribs) 01:23, 5 January 2025 (UTC)
- No. Per Cremastra, "We should have images drawn or taken by real humans who are trying to depict the subject," - User:RossEvans19 (talk) 02:12, 5 January 2025 (UTC)
- Yes, depending on specific case. One can use drawings by artists, even such as caricature. The latter is an intentional distortion, one could say an intentional misinformation. Still, such images are legitimate on many pages. Or consider numerous images of Jesus. How reliable are they? I am not saying we must deliberately use AI images on all pages, but they may be fine in some cases. Now, speaking on "medical articles"... One might actually use the AI generated images of certain biological objects like proteins or organelles. Of course a qualified editorial judgement is always needed to decide if they would improve a specific page (frequently they would not), but making a blanket ban would be unacceptable, in my opinion. For example, the images of protein models generated by AlphaFold would be fine. The AI-generated images of biological membranes I saw? I would say no. It depends. My very best wishes (talk) 02:50, 5 January 2025 (UTC) This is complicated of course. For example, there are tools that make an image of a person that (mis)represents him as someone much better and clever than he really is in life. That should be forbidden as an advertisement. This is a whole new world, but I do not think that a blanket rejection would be appropriate. My very best wishes (talk) 03:19, 5 January 2025 (UTC)
- No, I think there's legal and ethical issues here, especially with the current state of AI. Clovermoss🍀 (talk) 03:38, 5 January 2025 (UTC)
- No: Obviously, we shouldn't be using AI images to represent anyone. Lazman321 (talk) 05:31, 5 January 2025 (UTC)
- No Too risky for BLP's. Besides if people want AI generated content over editor made content, we should make it clear they are in the wrong place, and readers should be given no doubt as to our integrity, sincerity and effort to give them our best, not a program's. Alanscottwalker (talk) 14:51, 5 January 2025 (UTC)
- No, as AI's grasp on the Internet takes hold stronger and stronger, it's important Misplaced Pages, as the online encyclopedia it sets out to be, remains factual and real. Using AI images on Wiki would likely do more harm than good, further thinning the boundaries between what's real and what's not. – zmbro (talk) (cont) 16:52, 5 January 2025 (UTC)
- No, not at the moment. I think it will be hard to avoid portraits that have been enhanced by AI, as it has already been ongoing for a number of years and there is no way to avoid it, but I don't want arbitrarily generated AI portraits of any type. scope_creep 20:19, 5 January 2025 (UTC)
- No for natural images (e.g. photos of people). Generative AI by itself is not a reliable source for facts. In principle, generating images of people and directly sticking them in articles is no different than generating text and directly sticking it in articles. In practice, however, generating images is worse: Text can at least be discussed, edited, and improved afterwards. In contrast, we have significantly less policy and fewer rigorous methods of discussing how AI-generated images of natural objects should be improved (e.g. "make his face slightly more oblong, it's not close enough yet"). Discussion will devolve into hunches and gut feelings about the fidelity of images, all of which essentially fall under WP:OR. spintheer (talk) 20:37, 5 January 2025 (UTC)
- No I'm appalled that even a small minority of editors would support such an idea. We have enough credibility issues already; using AI-generated images to represent real people is not something that a real encyclopedia should even consider. LEPRICAVARK (talk) 22:26, 5 January 2025 (UTC)
- No I understand the comparison to using illustrations in BLP articles, but I've always viewed that as less preferable to no picture in all honesty. Images of a person are typically presented in context, such as a performer on stage, or a politician's official portrait, and I feel like there would be too many edge cases to consider in terms of making it clear that the photo is AI generated and isn't representative of anything that the person specifically did, but is rather an approximation. Tpdwkouaa (talk) 06:50, 6 January 2025 (UTC)
- No - Too often the images resemble caricatures. Real caricatures may be included in articles if the caricature (e.g., political cartoon) had significant coverage and is attributed to the artist. Otherwise, representations of living persons should be real representations taken with photographic equipment. Robert McClenon (talk) 02:31, 7 January 2025 (UTC)
- So you will be arguing for the removal of the lead images at Banksy, CGP Grey, etc. then? Thryduulf (talk) 06:10, 7 January 2025 (UTC)
- At this point you're making bad-faith "BY YOUR LOGIC" arguments. You're better than that. Don't do it. DS (talk) 19:18, 7 January 2025 (UTC)
- Strong no per bloodofox. —Nythar (💬-🍀) 03:32, 7 January 2025 (UTC)
- No for AI-generated BLP images Mrfoogles (talk) 21:40, 7 January 2025 (UTC)
- No - Not only is this effectively guesswork that usually includes unnatural artefacts, but worse, it is also based on unattributed work of photographers who didn't release their work into public domain. I don't care if it is an open legal loophole somewhere, IMO even doing away with the fair use restriction on BLPs would be morally less wrong. I suspect people on whose work LLMs in question were trained would also take less offense to that option. Daß Wölf 23:25, 7 January 2025 (UTC)
- No – WP:NFC says that
Non-free content should not be used when a freely licensed file that serves the same purpose can reasonably be expected to be uploaded, as is the case for almost all portraits of living people.
While AI images may not be considered copyrightable, it could still be a copyright violation if the output resembles other, copyrighted images, pushing the image towards NFC. At the very least, I feel the use of non-free content to generate AI images violates the spirit of the NFC policy. (I'm assuming copyrighted images of a person are used to generate an AI portrait of them; if free images of that person were used, we should just use those images, and if no images of the person were used, how on Earth would we trust the output?) RunningTiger123 (talk) 02:43, 8 January 2025 (UTC)
- No, AI images should not be permitted on Misplaced Pages at all. Stifle (talk) 11:27, 8 January 2025 (UTC)
Expiration date?
"AI," as the term is currently used, is very new. It feels like large language models and the type of image generators under discussion just got here in 2024. (Yes, I know it was a little earlier.) The culture hasn't completed its initial response to them yet. Right now, these images do more harm than good, but that may change. Either we'll come up with a better way of spotting hallucinations or the machines will hallucinate less. Their copyright status also seems unstable. I suggest that any ban decided upon here have some expiration date or required rediscussion date. Two years feels about right to me, but the important thing would be that the ban has a number on it. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)
- No need for any end-date. If there comes a point where consensus on this changes, then we can change any ban then. FOARP (talk) 05:27, 5 January 2025 (UTC)
- An end date is a positive suggestion. Consensus systems like Misplaced Pages's are vulnerable to half-baked precedential decisions being treated as inviolate. With respect, this conversation does not inspire confidence that this policy proposal's consequences are well-understood at this time. If Misplaced Pages goes in this direction, it should be labeled as primarily reactionary and open to review at a later date. lethargilistic (talk) 10:22, 5 January 2025 (UTC)
- Agree with FOARP, no need for an end date. If something significantly changes (e.g. reliable sources/news outlets such as the New York Times, BBC, AP, etc. start using text-to-image models to generate images of living people for their own articles) then this topic can be revisited later. Editors will have to go through the usual process of starting a new discussion/proposal when that time comes. Some1 (talk) 11:39, 5 January 2025 (UTC)
- Seeing as this discussion has not touched at all on what other organizations may or may not do, it would not be accurate to describe any consensus derived from this conversation in terms of what other organizations may or may not be doing. That is, there has been no consensus that we ought to be looking to the New York Times as an example. Doing so would be inadvisable for several reasons. For one, they have sued an AI company over semi-related issues and they have teams explicitly working on what the future of AI in news ought to look like, so they have some investment in what the future of AI looks like and they are explicitly trying to shape its norms. For another, if they did start to use AI in a way that may be controversial, they would have no positive reason to disclose that and many disincentives. They are not a neutral signal on this issue. Misplaced Pages should decide for itself, preferably doing so while not disrupting the ability of people to continue creating user-generated images. lethargilistic (talk) 03:07, 6 January 2025 (UTC)
- WP:Consensus can change on an indefinite basis, if something changes. An arbitrary sunset date doesn't seem much use. CMD (talk) 03:15, 6 January 2025 (UTC)
- No need per others. Additionally, if practices change, it doesn't mean editors will decide to follow new practices. As for the technology, it seems the situation has been fairly stable for the past two years: we can detect some fakes and hallucinations immediately, many more in the past, but certainly not all retouched elements and all generated photos available right now, even if there was a readily accessible tool or app that enabled ordinary people to reliably do so.
- Through the history, art forgeries have been fairly reliably detected, but rarely quickly. Relatedly, I don't see why the situation with AI images would change in the next 24 months or any similar time period. Daß Wölf 22:17, 9 January 2025 (UTC)
Should WP:Demonstrate good faith include mention of AI-generated comments?
Using AI to write your comments in a discussion makes it difficult for others to assume that you are discussing in good faith, rather than trying to use AI to argue someone into exhaustion (see example of someone using AI in their replies "Because I don't have time to argue with, in my humble opinion, stupid PHOQUING people"). More fundamentally, WP:AGF can't apply to the AI itself as AI lacks intentionality, and it is difficult for editors to assess how much of an AI-generated comment reflects the training of the AI vs. the actual thoughts of the editor.
Should WP:DGF be amended to include that using AI to generate your replies in a discussion runs counter to demonstrating good faith? Photos of Japan (talk) 00:23, 2 January 2025 (UTC)
- Yes, I think this is a good idea. :bloodofox: (talk) 00:39, 2 January 2025 (UTC)
- No. As with all the other concurrent discussions (how many times do we actually need to discuss the exact same FUD and scaremongering?) the problem is not AI, but rather inappropriate use of AI. What we need to do is to (better) explain what we actually want to see in discussions, not vaguely defined bans of swathes of technology that, used properly, can aid communication. Thryduulf (talk) 01:23, 2 January 2025 (UTC)
- Note that this topic is discussing using AI to generate replies, as opposed to using it as an aid (e.g. asking it to edit for grammar, or conciseness). As the above concurrent discussion demonstrates, users are already using AI to generate their replies in AfD, so it isn't scaremongering but an actual issue.
- WP:DGF also does not ban anything ("Showing good faith is not required"), but offers general advice on demonstrating good faith. So it seems like the most relevant place to include mention of the community's concerns regarding AI-generated comments, without outright banning anything. Photos of Japan (talk) 01:32, 2 January 2025 (UTC)
- And as pointed out, multiple times in those discussions, different people understand different things from the phrase "AI-generated". The community's concern is not AI-generated comments, but comments that do not clearly and constructively contribute to a discussion - some such comments are AI-generated, some are not. This proposal would, just as all the other related ones, cause actual harm when editors falsely accuse others of using AI (and this will happen). Thryduulf (talk) 02:34, 2 January 2025 (UTC)
- Nobody signed up to argue with bots here. If you're pasting someone else's comment into a prompt and asking the chatbot to argue against that comment and just posting it in here, that's a real problem and absolutely should not be acceptable. :bloodofox: (talk) 03:31, 2 January 2025 (UTC)
- Thank you for the assumption of bad faith and demonstrating one of my points about the harm caused. Nobody is forcing you to engage with bad-faith comments, but whether something is or is not bad faith needs to be determined by its content not by its method of generation. Simply using an AI demonstrates neither good faith nor bad faith. Thryduulf (talk) 04:36, 2 January 2025 (UTC)
- I don't see why we have any particular reason to suspect a respected and trustworthy editor of using AI. Cremastra (u — c) 14:31, 2 January 2025 (UTC)
- I'm one of those people who clarified the difference between AI-generated vs. edited, and such a difference could be made explicit with a note. Editors are already accusing others of using AI. Could you clarify how you think addressing AI in WP:DGF would cause actual harm? Photos of Japan (talk) 04:29, 2 January 2025 (UTC)
- By encouraging editors to accuse others of using AI, by encouraging editors to dismiss or ignore comments because they suspect that they are AI-generated rather than engaging with them. @Bloodofox has already encouraged others to ignore my arguments in this discussion because they suspect I might be using an LLM and/or be a bot (for the record I'm neither). Thryduulf (talk) 04:33, 2 January 2025 (UTC)
- I think bloodofox's comment was about "you" in the rhetorical sense, not "you" as in Thryduulf. jlwoodwa (talk) 11:06, 2 January 2025 (UTC)
- Given your relentlessly pro-AI comments here, it seems that you'd be A-OK with just chatting with a group of chatbots here — or leaving the discussion to them. However, most of us clearly are not. In fact, I would immediately tell someone to get lost were it confirmed that that is indeed what is happening. I'm a human being and find the notion of wasting my time with chatbots on Misplaced Pages to be incredibly insulting and offensive. :bloodofox: (talk) 04:38, 2 January 2025 (UTC)
- My comments are neither pro-AI nor anti-AI, indeed it seems that you have not understood pretty much anything I'm saying. Thryduulf (talk) 04:43, 2 January 2025 (UTC)
- Funny, you've done nothing here but argue for more generative AI on the site and now you seem to be arguing to let chatbots run rampant on it while mocking anyone who doesn't want to interface with chatbots on Misplaced Pages. Hey, why not just sell the site to Meta, am I right? :bloodofox: (talk) 04:53, 2 January 2025 (UTC)
- I haven't been arguing for more generative AI on the site. I've been arguing against banning it on the grounds that such a ban would be unclear, unenforceable, wouldn't solve any problems (largely because whether something is AI or not is completely irrelevant to the matter at hand) but would instead cause harm. Some of the issues identified are actual problems, but AI is not the cause of them and banning AI won't fix them.
- I'm not mocking anybody, nor am I advocating to "let chatbots run rampant". I'm utterly confused why you think I might advocate for selling Misplaced Pages to Meta (or anyone else for that matter)? Are you actually reading anything I'm writing? You clearly are not understanding it. Thryduulf (talk) 05:01, 2 January 2025 (UTC)
- So we're now in 'everyone else is the problem, not me!' territory now? Perhaps try communicating in a different way because your responses here are looking very much like the typical AI apologetics one can encounter on just about any contemporary LinkedIn thread from your typical FAANG employee. :bloodofox: (talk) 05:13, 2 January 2025 (UTC)
- No, this is not an "everyone else is the problem, not me" issue, because most other people appear to be able to understand my arguments and respond to them appropriately. Not everybody agrees with them, but that's not an issue.
- I'm not familiar with LinkedIn threads (I don't use that platform) nor what a "FAANG employee" is (I've literally never heard the term before now), so I have no idea whether your characterisation is a compliment or a personal attack, but given your comments towards me and others you disagree with elsewhere I suspect it's closer to the latter.
- AI is a tool. Just like any other tool it can be used in good faith or in bad faith, it can be used well and it can be used badly, it can be used in appropriate situations and it can be used in inappropriate situations, the results of using the tool can be good and the results of using the tool can be bad. Banning the tool inevitably bans the good results as well as the bad results but doesn't address the reasons why the results were good or bad and so does not resolve the actual issue that led to the bad outcomes. Thryduulf (talk) 12:09, 2 January 2025 (UTC)
- In the context of generating comments to other users though, AI is much easier to use for bad faith than for good faith. LLMs don't understand Misplaced Pages's policies and norms, and so are hard to utilize to generate posts that productively address them. By contrast, bad actors can easily use LLMs to make low quality posts to waste people's time or wear them down.
- In the context of generating images, or text for articles, it's easy to see how the vast majority of users using AI for those purposes is acting in good faith as these are generally constructive tasks, and most people making bad faith changes to articles are either obvious vandals who won't bother to use AI because they'll be reverted soon anyways, or trying to be subtle (povpushers) in which case they tend to want to carefully write their own text into the article.
- It's true that AI "is just a tool", but when that tool is much easier to use for bad faith purposes (in the context of discussions) then it raises suspicions about why people are using it. Photos of Japan (talk) 22:44, 2 January 2025 (UTC)
- "LLMs don't understand Misplaced Pages's policies and norms" – They're not designed to "understand" them, since the policies and norms were designed for human cognition. The fact that AI is used rampantly by people acting in bad faith on Misplaced Pages does not inherently condemn the AI. To me, it shows that it's too easy for vandals to access and do damage on Misplaced Pages. Unfortunately, the type of vetting required to prevent that at the source would also potentially require eliminating IP-editing, which won't happen. Duly signed, ⛵ WaltClipper -(talk) 14:33, 15 January 2025 (UTC)
- You mentioned "FUD". That acronym, "fear, uncertainty and doubt," is used in precisely two contexts: pro-AI propagandizing and persuading people who hold memecoin crypto to continue holding it. Since this discussion is not about memecoin crypto, that would suggest you are using it in a pro-AI context. I will note, fear, uncertainty and doubt is not my problem with AI. Rather it's anger, aesthetic disgust and feeling disrespected when somebody makes me talk to their chatbot. Simonm223 (talk) 14:15, 14 January 2025 (UTC)
- "That acronym, 'fear, uncertainty and doubt,' is used in precisely two contexts" is factually incorrect. FUD predates AI by many decades (indeed, if you'd bothered to read the fear, uncertainty and doubt article you'd learn that the concept was first recorded in 1693, the exact formulation dates from at least the 1920s, and its use in technology contexts originated in 1975 in the context of mainframe computer systems). The claim that its use, even in just AI contexts, is limited to pro-AI advocacy is ludicrous (even ignoring things like Roko's basilisk); examples can be found in these sprawling discussions from those opposing AI use on Misplaced Pages. Thryduulf (talk) 14:52, 14 January 2025 (UTC)
- Not really – I agree with Thryduulf's arguments on this one. Using AI to help tweak or summarize or "enhance" replies is of course not bad faith – the person is trying hard. Maybe English is their second language. Even for replies 100% AI-generated the user may be an ESL speaker struggling to remember the right words (I always forget 90% of my French vocabulary when writing anything in French, for example). In this case, I don't think we should make a blanket assumption that using AI to generate comments is not showing good faith. Cremastra (u — c) 02:35, 2 January 2025 (UTC)
- Yes, because generating walls of text is not good faith. People "touching up" their comments is also bad (for starters, if you lack the English competency to write your statements in the first place, you probably lack the competency to tell if your meaning has been preserved or not). Exactly what AGF should say needs work, but something needs to be said, and DGF is a good place to do it. XOR'easter (talk) 02:56, 2 January 2025 (UTC)
- Not all walls of text are generated by AI, not all AI generated comments are walls of text. Not everybody who uses AI to touch up their comments lacks the competencies you describe, not everybody who does lack those competencies uses AI. It is not always possible to tell which comments have been generated by AI and which have not. This proposal is not particularly relevant to the problems you describe. Thryduulf (talk) 03:01, 2 January 2025 (UTC)
- Someone has to ask: Are you generating all of these pro-AI arguments using ChatGPT? It'd explain a lot. If so, I'll happily ignore any and all of your contributions, and I'd advise anyone else to do the same. We're not here to be flooded with LLM-derived responses. :bloodofox: (talk) 03:27, 2 January 2025 (UTC)
- That you can't tell whether my comments are AI-generated or not is one of the fundamental problems with these proposals. For the record they aren't, nor are they pro-AI - they're simply anti throwing out babies with bathwater. Thryduulf (talk) 04:25, 2 January 2025 (UTC)
- I'd say it also illustrates the serious danger: We can no longer be sure that we're even talking to other people here, which is probably the most notable shift in the history of Misplaced Pages. :bloodofox: (talk) 04:34, 2 January 2025 (UTC)
- How is that a "serious danger"? If a comment makes a good point, why does it matter whether it was AI generated or not? If it doesn't make a good point, why does it matter if it was AI generated or not? How will these proposals resolve that "danger"? How will they be enforceable? Thryduulf (talk) 04:39, 2 January 2025 (UTC)
- Misplaced Pages is made for people, by people, and I like most people will be incredibly offended to find that we're just playing some kind of LLM pong with a chatbot of your choice. You can't be serious. :bloodofox: (talk) 04:40, 2 January 2025 (UTC)
- You are entitled to that philosophy, but that doesn't actually answer any of my questions. Thryduulf (talk) 04:45, 2 January 2025 (UTC)
- "why does it matter if it was AI generated or not?"
- Because it takes little effort to post a lengthy, low quality AI-generated post, and a lot of effort for human editors to write up replies debunking them.
- "How will they be enforceable? "
- WP:DGF isn't meant to be enforced. It's meant to explain to people how they can demonstrate good faith. Posting replies to people (who took the time to write them) that are obviously AI-generated harms the ability of those people to assume good faith. Photos of Japan (talk) 05:16, 2 January 2025 (UTC)
- The linked "example of someone using AI in their replies" appears – to me – to be a non-AI-generated comment. I think I preferred the allegedly AI-generated comments from that user (example). The AI was at least superficially polite. WhatamIdoing (talk) 04:27, 2 January 2025 (UTC)
- Obviously the person screaming in all caps that they use AI because they don't want to waste their time arguing is not using AI for that comment. Their first post calls for the article to be deleted for not "offering new insights or advancing scholarly understanding" and "merely" reiterating what other sources have written.
- Yes, after a human had wasted their time explaining all the things wrong with its first post, then the bot was able to write a second post which looks ok. Except it only superficially looks ok, it doesn't actually accurately describe the articles. Photos of Japan (talk) 04:59, 2 January 2025 (UTC)
- Multiple humans have demonstrated in these discussions that humans are equally capable of writing posts which superficially look OK but don't actually accurately relate to anything they are responding to. Thryduulf (talk) 05:03, 2 January 2025 (UTC)
- But I can assume that everyone here is acting in good faith. I can't assume good faith in the globally-locked sock puppet spamming AfD discussions with low effort posts, whose bot is just saying whatever it can to argue for the deletion of political pages the editor doesn't like. Photos of Japan (talk) 05:09, 2 January 2025 (UTC)
- True, but I think that has more to do with the "globally-locked sock puppet spamming AfD discussions" part than with the "some of it might be " part. WhatamIdoing (talk) 07:54, 2 January 2025 (UTC)
- All of which was discovered because of my suspicions from their inhuman and meaningless replies. "Reiteration isn't the problem; redundancy is," maybe sounds pithy in a vacuum, but this was written in reply to me stating that we aren't supposed to be doing OR but reiterating what the sources say.
- "Your criticism feels overly prescriptive, as though you're evaluating this as an academic essay" also sounds good, until you realize that the bot is actually criticizing its own original post.
- The fact that my suspicions about their good faith were ultimately validated only makes it even harder for me to assume good faith in users who sound like ChatGPT. Photos of Japan (talk) 08:33, 2 January 2025 (UTC)
- I wonder if we need some other language here. I can understand feeling like this is a bad interaction. There's no sense that the person cares; there's no feeling like this is a true interaction. A contract lawyer would say that there's no meeting of the minds, and there can't be, because there's no mind in the AI, and the human copying from the AI doesn't seem to be interested in engaging their brain.
- But... do you actually think they're doing this for the purpose of intentionally harming Misplaced Pages? Or could this be explained by other motivations? Never attribute to malice that which can be adequately explained by stupidity – or to anxiety, insecurity (will they hate me if I get my grammar wrong?), incompetence, negligence, or any number of other "understandable" (but still something WP:SHUN- and even block-worthy) reasons. WhatamIdoing (talk) 08:49, 2 January 2025 (UTC)
- The user's talk page has a header at the top asking people not to template them because it is "impersonal and disrespectful", instead requesting "please take a moment to write a comment below in your own words"
- Does this look like acting in good faith to you? Requesting other people write personalized responses to them while they respond with an LLM? Because it looks to me like they are trying to waste other people's time. Photos of Japan (talk) 09:35, 2 January 2025 (UTC)
- Misplaced Pages:Assume good faith means that you assume people aren't deliberately screwing up on purpose. Humans are self-contradictory creatures. I generally do assume that someone who is being hypocritical hasn't noticed their contradictions yet. WhatamIdoing (talk) 07:54, 3 January 2025 (UTC)
- "Being hypocritical" in the abstract isn't the problem, it's the fact that asking people to put effort into their comments, while putting in minimal effort into your own comments appears bad faith, especially when said person says they don't want to waste time writing comments to stupid people. The fact you are arguing AGF for this person is both astounding and disappointing. Photos of Japan (talk) 16:08, 3 January 2025 (UTC)
- It feels like there is a lack of reciprocity in the interaction, even leaving aside the concern that the account is a block-evading sock.
- But I wonder if you have read AGF recently. The first sentence is "Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Misplaced Pages, even when their actions are harmful."
- So we've got some of this (e.g., harmful actions). But do you really believe this person woke up in the morning and decided "My main goal for today is to deliberately hurt Misplaced Pages. I might not be successful, but I sure am going to try hard to reach my goal"? WhatamIdoing (talk) 23:17, 4 January 2025 (UTC)
- Trying to hurt Misplaced Pages doesn't mean they have to literally think "I am trying to hurt Misplaced Pages", it can mean a range of things, such as "I am trying to troll Wikipedians". A person who thinks a cabal of editors is guarding an article page, and that they need to harass them off the site, may think they are improving Misplaced Pages, but at the least I wouldn't say that they are acting in good faith. Photos of Japan (talk) 23:27, 4 January 2025 (UTC)
- Sure, I'd count that as a case of "trying to hurt Misplaced Pages-the-community". WhatamIdoing (talk) 06:10, 5 January 2025 (UTC)
- The issues with AI in discussions are not related to good faith, which is narrowly defined to intent. CMD (talk) 04:45, 2 January 2025 (UTC)
- In my mind, they are related inasmuch as it is much more difficult for me to ascertain good faith if the words are eminently not written by the person I am speaking to in large part, but instead generated based on an unknown prompt in what is likely a small fraction of the expected time. To be frank, in many situations it is difficult to avoid the conclusion that the disparity in effort is being leveraged in something less than good faith. Remsense ‥ 论 05:02, 2 January 2025 (UTC)
- Assume good faith, don't ascertain! LLM use can be deeply unhelpful for discussions and the potential for mis-use is large, but in the most recent discussion I've been involved with where I observed an LLM post, it was responded to by an LLM post; I believe both the users were doing this in good faith. CMD (talk) 05:07, 2 January 2025 (UTC)
- All I mean to say is it should be licit that unhelpful LLM use should be something that can be mentioned like any other unhelpful rhetorical pattern. Remsense ‥ 论 05:09, 2 January 2025 (UTC)
- Sure, but WP:DGF doesn't mention any unhelpful rhetorical patterns. CMD (talk) 05:32, 2 January 2025 (UTC)
- The fact that everyone (myself included) defending "LLM use" says "use" rather than "generated", is a pretty clear sign that no one really wants to communicate with someone using "LLM generated" comments. We can argue about bans (not being proposed here), how to know if someone is using LLM, the nuances of "LLM use", etc., but at the very least we should be able to agree that there are concerns with LLM generated replies, and if we can agree that there are concerns then we should be able to agree that somewhere in policy we should be able to find a place to express those concerns. Photos of Japan (talk) 05:38, 2 January 2025 (UTC)
- ...or they could be saying "use" because "using LLMs" is shorter and more colloquial than "generating text with LLMs"? Gnomingstuff (talk) 06:19, 2 January 2025 (UTC)
- Seems unlikely when people justify their use for editing (which I also support), and not for generating replies on their behalf. Photos of Japan (talk) 06:23, 2 January 2025 (UTC)
- This is just semantics.
- For instance, I am OK with someone using a LLM to post a productive comment on a talk page. I am also OK with someone generating a reply with a LLM that is a productive comment to post to a talk page. I am not OK with someone generating text with an LLM to include in an article, and also not OK with someone using a LLM to contribute to an article.
- The only difference between these four sentences is that two of them are more annoying to type than the other two. Gnomingstuff (talk) 08:08, 2 January 2025 (UTC)
- Most people already assume good faith in those making productive contributions. In situations where good faith is more difficult to assume, would you trust someone who uses an LLM to generate all of their comments as much as someone who doesn't? Photos of Japan (talk) 09:11, 2 January 2025 (UTC)
- Given that LLM-use is completely irrelevant to the faith in which a user contributes, yes. Of course what amount that actually is may be anywhere between completely and none. Thryduulf (talk) 11:59, 2 January 2025 (UTC)
- LLM-use is relevant as it allows bad faith users to disrupt the encyclopedia with minimal effort. Such a user posted in this thread earlier, as well as started a disruptive thread here and posted here, all using AI. I had previously been involved in a debate with another sock puppet of theirs, but at that time they didn't use AI. Now it seems they are switching to using an LLM just to troll with minimal effort. Photos of Japan (talk) 21:44, 2 January 2025 (UTC)
- LLMs are a tool that can be used by good and bad faith users alike. Using an LLM tells you nothing about whether a user is contributing in good or bad faith. If somebody is trolling they can be, and should be, blocked for trolling regardless of the specifics of how they are trolling. Thryduulf (talk) 21:56, 2 January 2025 (UTC)
- A can of spray paint, a kitchen knife, etc., are tools that can be used for good or bad, but if you bring them some place where they have few good uses and many bad uses then people will be suspicious about why you brought them. You can't just assume that a tool in any context is equally harmless. Using AI to generate replies to other editors is more suspicious than using it to generate a picture exemplifying a fashion style, or a description of a physics concept. Photos of Japan (talk) 23:09, 2 January 2025 (UTC)
- I wouldn't trust anything factual the person would have to say, but I wouldn't assume they were malicious, which is the entire point of WP:AGF. Gnomingstuff (talk) 16:47, 2 January 2025 (UTC)
- WP:AGF is not a death pact though. At times you should be suspicious. Do you think that if a user, who you already have suspicions of, is also using an LLM to generate their comments, that that doesn't have any effect on those suspicions? Photos of Japan (talk) 21:44, 2 January 2025 (UTC)
- So… If you suspect that someone is not arguing in good faith… just stop engaging them. If they are creating walls of text but not making policy based arguments, they can be ignored. Resist the urge to respond to every comment… it isn’t necessary to “have the last word”. Blueboar (talk) 21:57, 2 January 2025 (UTC)
- As the person just banned at ANI for persistently using LLMs to communicate demonstrates, you can't "just stop engaging them". When they propose changes to an article and say they will implement them if no one replies, then somebody has to engage them in some way. It's not about trying to "have the last word"; this is a collaborative project, and it generally requires engaging with others to some degree. When someone like the person I linked to above (now a banned sock) spams low quality comments across dozens of AfDs, then they are going to waste people's time, and telling others to just not engage with them is dismissive of that. Photos of Japan (talk) 22:57, 2 January 2025 (UTC)
- That they've been banned for disruption indicates we can do everything we need to do to deal with bad faith users of LLMs without assuming that everyone using an LLM is doing so in bad faith. Thryduulf (talk) 00:33, 3 January 2025 (UTC)
- I don't believe we should assume everyone using an LLM is doing so in bad faith, so I'm glad you think my comment indicates what I believe. Photos of Japan (talk) 01:09, 3 January 2025 (UTC)
- No -- whatever you think of LLMs, the reason they are so popular is that the people who use them earnestly believe they are useful. Claiming otherwise is divorced from reality. Even people who add hallucinated bullshit to articles are usually well-intentioned (if wrong). Gnomingstuff (talk) 06:17, 2 January 2025 (UTC)
- Comment I have no opinion on this matter, however, note that we are currently dealing with a real-world application of this at ANI and there's a generalized state of confusion in how to address it. Chetsford (talk) 08:54, 2 January 2025 (UTC)
- Yes I find it incredibly rude for someone to procedurally generate text and then expect others to engage with it as if they were actually saying something themselves. Simonm223 (talk) 14:34, 2 January 2025 (UTC)
- Yes, mention that use of an LLM should be disclosed and that failure to do so is like not telling someone you are taping the call. Selfstudier (talk) 14:43, 2 January 2025 (UTC)
- I could support general advice that if you're using machine translation or an LLM to help you write your comments, it can be helpful to mention this in the message. The tone to take, though, should be "so people won't be mad at you if it screwed up the comment" instead of "because you're an immoral and possibly criminal person if you do this". WhatamIdoing (talk) 07:57, 3 January 2025 (UTC)
- No. When someone publishes something under their own name, they are incorporating it as their own statement. Plagiarism from an AI or elsewhere is irrelevant to whether they are engaging in good faith. lethargilistic (talk) 17:29, 2 January 2025 (UTC)
- Comment LLMs know a few tricks about logical fallacies and some general ways of arguing (rhetoric), but they are incredibly dumb at understanding the rules of Wikipedia. You can usually tell this because it looks like incredibly slick and professional prose, but somehow it cannot get even the simplest points about the policies and guidelines of Wikipedia. I would indef such users for lacking WP:CIR. tgeorgescu (talk) 17:39, 2 January 2025 (UTC)
- That guideline states "Sanctions such as blocks and bans are always considered a last resort where all other avenues of correcting problems have been tried and have failed." Gnomingstuff (talk) 19:44, 2 January 2025 (UTC)
- WP:CIR isn't a guideline, but an essay. Relevantly though it is being cited at this very moment in an ANI thread concerning a user who can't/won't communicate without an LLM. Photos of Japan (talk) 20:49, 2 January 2025 (UTC)
- I blocked that user as NOTHERE a few minutes ago after seeing them (using ChatGPT) make suggestions for text to live pagespace while their previous bad behaviors were under discussion. AGF is not a suicide pact. BusterD (talk) 20:56, 2 January 2025 (UTC)
- "... but somehow it cannot get even the simplest points about the policies and guidelines of Wikipedia": That problem existed with some humans even prior to LLMs. —Bagumba (talk) 02:53, 20 January 2025 (UTC)
- No - Not a good or bad faith issue. PackMecEng (talk) 21:02, 2 January 2025 (UTC)
- Yes Using a 3rd party service to contribute to the Misplaced Pages on your behalf is clearly bad-faith, analogous to paying someone to write your article. Zaathras (talk) 14:39, 3 January 2025 (UTC)
- Its a stretch to say that a newbie writing a comment using AI is automatically acting in bad faith and not here to build an encyclopedia. PackMecEng (talk) 16:55, 3 January 2025 (UTC)
- That's true, but this and other comments here show that not a few editors perceive it as bad-faith, rude, etc. I take that as an indication that we should tell people to avoid doing this when they have enough CLUE to read WP:AGF and are making an effort to show they're acting in good faith. Daß Wölf 23:06, 9 January 2025 (UTC)
- Comment Large language model AIs like ChatGPT are in their infancy. The culture hasn't finished its initial reaction to them yet. I suggest that any proposal made here have an automatic expiration/required rediscussion date two years after closing. Darkfrog24 (talk) 22:42, 3 January 2025 (UTC)
- No – It is a matter of how you use AI. I use Google translate to add trans-title parameters to citations, but I am careful to check for Google's output making for good English as well as reflecting the foreign title when it is a language I somewhat understand. I like to think that I am careful, and I do not pretend to be fluent in a language I am not familiar with, although I usually don't announce the source of such a translation. If an editor uses AI profligately and without understanding the material generated, then that is the sin; not AI itself. Dhtwiki (talk) 05:04, 5 January 2025 (UTC)
- There's a legal phrase, "when the exception swallows the rule", and I think we might be headed there with the recent LLM/AI discussions.
- We start off by saying "Let's completely ban it!" Then in discussion we add "Oh, except for this very reasonable thing... and that reasonable thing... and nobody actually meant this other reasonable thing..."
- The end result is that it's "completely banned" ...except for an apparent majority of uses. WhatamIdoing (talk) 06:34, 5 January 2025 (UTC)
- Do you want us to reply to you, because you are a human? Or are you just posting the output of an LLM without bothering to read anything yourself? DS (talk) 06:08, 7 January 2025 (UTC)
- Most likely you would reply because someone posted a valid comment and you are assuming they are acting in good faith and taking responsibility for what they post. To assume otherwise is kind of weird and not in line with general Wikipedia values. PackMecEng (talk) 15:19, 8 January 2025 (UTC)
- No The OP seems to misunderstand WP:DGF which is not aimed at weak editors but instead exhorts stronger editors to lead by example. That section already seems to overload the primary point of WP:AGF and adding mention of AI would be quite inappropriate per WP:CREEP. Andrew🐉(talk) 23:11, 5 January 2025 (UTC)
- No. Reading the current text of the section, adding text about AI would feel out-of-place for what the section is about. —pythoncoder (talk | contribs) 05:56, 8 January 2025 (UTC)
- No, this is not about good faith. Adumbrativus (talk) 11:14, 9 January 2025 (UTC)
- Yes. AI use is not a demonstration of bad faith (in any case not every new good-faith editor is familiar with our AI policies), but it is equally not a "demonstration of good faith", which is what the WP:DGF section is about.
- It seems some editors are missing the point and !voting as if every edit is either a demonstration of good faith or bad faith. Most interactions are neutral and so is most AI use, but I find it hard to imagine a situation where AI use would point away from unfamiliarity and incompetence (in the CIR sense), and it often (unintentionally) leads to a presumption of laziness and open disinterest. It makes perfect sense to recommend against it. Daß Wölf 22:56, 9 January 2025 (UTC)
- Indeed most kinds of actions don't inherently demonstrate good or bad. The circumspect and neutral observation that "AI use is not a demonstration of bad faith... but it is equally not a 'demonstration of good faith'" does not justify a proposal to one-sidedly say just half. And among all the actions that don't necessarily demonstrate good faith (and don't necessarily demonstrate bad faith either), it is not the purpose of "demonstrate good faith" and the broader guideline, to single out one kind of action to especially mention negatively. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)
- Yes. Per Dass Wolf, though I would say passing off a completely AI-generated comment as your own anywhere is inherently bad-faith and one doesn't need to know Wiki policies to understand that. JoelleJay (talk) 23:30, 9 January 2025 (UTC)
- Yes. Sure, LLMs may have utility somewhere, and it might be a crutch for people unfamiliar with English, but as I've said above in the other AI RfC, that's a competence issue. This is about comments eating up editor time, energy, about LLMs easily being used to ram through changes and poke at editors in good standing. I don't see a case wherein a prospective editor's command of policy and language is good enough to discuss with other editors while being bad enough to require LLM use. Iseult talk to me 01:26, 10 January 2025 (UTC)
- Good faith is separate from competence. Trying to do good is separate from having skills and knowledge to achieve good results. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)
- No - anyone using a washing machine to wash their clothes must be evil and inherently lazy. They cannot be trusted. ... Oh, sorry, wrong century. Regards, --Goldsztajn (talk) 01:31, 10 January 2025 (UTC)
- No - As long as a person understands (and knows) what they are talking about, we shouldn't discriminate against folks using generative AI tech for grammar fixes or minor flow improvements. Yes, AI can create walls of text, and make arguments not grounded in policy, but we could do that even without resorting to generative AI. Sohom (talk) 11:24, 13 January 2025 (UTC)
- To expand on my point above. Completely AI generated comments (or articles) are obviously bad, but "using AI" should be thrown into the same cross-hairs as completely AI generated comments. Sohom (talk) 11:35, 13 January 2025 (UTC)
  - @Sohom Datta You mean shouldn't be thrown? I think that would make more sense given the context of your original !vote. Duly signed, ⛵ WaltClipper -(talk) 14:08, 14 January 2025 (UTC)
- No. Don't make any changes. It's not a good faith/bad faith issue. The 'yes' arguments are most unconvincing with very bizarre analogies to make their point. Here, I can make one too: "Don't edit with AI; you wouldn't shoot your neighbor's dog with a BB-gun, would you?" Duly signed, ⛵ WaltClipper -(talk) 14:43, 13 January 2025 (UTC)
Extended content (collapsed LLM-generated demonstration):
: I appreciate your concern about the use of AI in discussions. It is important to be mindful of how AI is used, and to ensure that it is used in a way that is respectful of others.
: I don't think that WP:DGF should be amended to specifically mention AI. However, I do think that it is important to be aware of the potential for AI to be used in a way that is not in good faith. When using AI, it is important to be transparent about it. Let others know that you are using AI, and explain how you are using it. This will help to build trust and ensure that others understand that you are not trying to deceive them. It is also important to be mindful of the limitations of AI. AI is not a perfect tool, and it can sometimes generate biased or inaccurate results. Be sure to review and edit any AI-generated content before you post it. Finally, it is important to remember that AI is just a tool. It is up to you to use it in a way that is respectful and ethical.
- It's easy to detect for most, can be pointed out as needed. No need to add an extra policy. JayCubby
Allowing non-admin "delete" closures at RfD
At Wikipedia:Deletion review#Clock/calendar, a few editors (Enos733 and Jay, while Robert McClenon and OwenX hinted at it) expressed support for allowing non-administrators to close RfD discussions as "delete". While I don't personally hold strong opinions in this regard, I would like for this idea to be discussed here. JJPMaster (she/they) 13:13, 7 January 2025 (UTC)
- That would not be helpful. -- Tavix 14:10, 7 January 2025 (UTC)
- While I have no issue with the direction the linked discussion has taken, I agree with almost every contributor there: As a practice I have zero interest in generally allowing random editors closing outside their permissions. It might make DRV a more chatty board, granted. BusterD (talk) 15:02, 7 January 2025 (UTC)
- Tamzin makes a reasonable case in their comment below. When we have already chosen to trust certain editors with advanced permissions, we might allow those folks to utilize them as fully as accepted practice allows. Those humans already have skin in the game. They are unlikely to act rashly. BusterD (talk) 19:32, 7 January 2025 (UTC)
- To me, non-admin delete closes at any XfD have always seemed inconsistent with what we say about how adminship and discussion closing work. I would be in violation of admin policy if I deleted based on someone else's close without conducting a full review myself, in which case, what was the point of their close? It's entirely redundant to my own work. That said, I can't really articulate a reason that this should be allowed at some XfDs but not others, and it seems to have gone fine at CfD and TfD. I guess call me neutral. What I'd be more open to is allowing page movers to do this. Page movers do have the tools to turn a bluelink red, so it doesn't create the same admin accountability issue if I'm just cleaning up the stray page left over from a page mover's use of a tool that they were duly granted and subject to their own accountability rules for. We could let them move a redirect to some other plausible title (this would violate WP:MOVEREDIRECT as currently written but I think I'd be okay with making this a canonical exception), and/or allow moving to some draftspace or userspace page and tagging for G6, as we do with {{db-moved}}. I'll note that when I was a non-admin pagemover, I did close a few things as delete where some edge case applied that let me effect the deletion using only suppressredirect, and no one ever objected. -- Tamzin (they|xe|🤷) 19:07, 7 January 2025 (UTC)
- I see that I was sort of vague, which is consistent with the statement that I hinted at allowing non-admin delete closures. My main concern is that I would like to see our guidelines and our practice made consistent, either by changing the guidelines or changing the practice. It appears that there is a rough consensus emerging that non-admin delete closures should continue to be disallowed in RFD, but that CFD may be a special case. So what I am saying is that if, in practice, we allow non-admin Delete closures at CFD, the guideline should say something vague to that effect.
- I also see that there is a consensus that DRV can endorse irregular non-admin closures, including irregular non-admin Delete closures. Specifically, it isn't necessary for DRV to vacate the closure for an uninvolved admin to close. A consensus at DRV, some of whose editors will be uninvolved admins, is at least as good a close as a normal close by an uninvolved admin.
- Also, maybe we need clearer guidance about non-admin Keep closures of AFDs. I think that if an editor is not sure whether they have sufficient experience to be closing AFDs as Keep, they don't have enough experience. I think that the guidance is clear enough in saying that administrator accountability applies to non-admin closes, but maybe it needs to be further strengthened, because at DRV we sometimes deal with non-admin closes where the closer doesn't respond to inquiries, or is rude in response to them.
- Also, maybe we need clearer guidance about non-admin No Consensus closures of AFDs. In particular, a close of No Consensus is a contentious closure, and should either be left to an admin, or should be Relisted.
- Robert McClenon (talk) 19:20, 7 January 2025 (UTC)
- As for "I can't really articulate a reason that this should be allowed at some XfDs", the argument is that more work is needed to enact closures at TfD and CfD (namely orphaning templates and emptying/moving/merging categories). Those extra steps aren't present at RfD. At most, there are times when it's appropriate to unlink the redirect or add WP:RCATs but those are automated steps that WP:XFDC handles. From my limited experience at TfD and CfD though, it does seem that the extra work needed at closure does not compensate for the extra work from needing two people reviewing the closure (especially at CfD because a bot handles the clean-up). Consistency has come up and I would much rather consistently disallow non-admin delete closures at all XfD venues. I know it's tempting for non-admins to think they're helping by enacting these closures but it's not fair for them to be spinning their wheels. As for moving redirects, that's even messier than deleting them. There's a reason that WP:MOVEREDIRECT advises not to move redirects except for limited cases when preserving history is important. -- Tavix 20:16, 7 January 2025 (UTC)
- @Tamzin: I do have one objection to this point of redundancy, which you are quite familiar with. Here, an AfD was closed as "transwiki and delete", however, the admin who did the closure does not have the technical ability to transwiki pages to the English Wikibooks, meaning that I, who does, had to determine that the outcome was actually to transwiki rather than blindly accepting a request at b:WB:RFI. Then, I had to mark the pages for G6 deletion, that way an admin, in this case you, could determine that the page was ready to be deleted. Does this mean that that admin who closed the discussion shouldn't have closed it, since they only have the technical ability to delete, not transwiki? Could I have closed it, having the technical ability to transwiki, but not delete? Either way, someone else would have had to review it. Or, should only people who have importing rights on the target wiki and admin rights on the English Misplaced Pages be allowed to close discussions as "transwiki and delete"? JJPMaster (she/they) 12:04, 8 January 2025 (UTC)
- I do support being explicit when a non-administrator can close a discussion as "delete" and I think that explicitly extending to RfD and CfD is appropriate. First, there can be a backlog in both of these areas and there are often few comments in each discussion (and there is usually not the same passion as in an AfD). Second, the delete close of a non-administrator is reviewed by an administrator before action is taken to delete the link, or category (a delete close is a two-step process, the writeup and the delete action, so in theory the administrator's workload is reduced). Third, non-admins do face administrator accountability for their actions, and can be subject to sanction. Fourth, the community has a role in reviewing closing decisions at DRV, so there is already a process in place to check an inexperienced editor or poor close. Finally, with many, if not most discussions for deletion the outcome is largely straightforward. --Enos733 (talk) 20:01, 7 January 2025 (UTC)
- There is currently no rule against non-admin delete closures as far as I know; the issue is the practical one that you don't have the ability to delete. However, I have made non-admin delete closures at AfD. This occurred when an admin deleted the article under consideration (usually for COPYVIO) without closing the related AfD. The closures were not controversial and there was no DRV. Hawkeye7 (discuss) 20:31, 7 January 2025 (UTC)
- The situation you're referring to is an exception allowed per WP:NACD: "If an administrator has deleted a page (including by speedy deletion) but neglected to close the discussion, anyone with a registered account may close the discussion provided that the administrator's name and deletion summary are included in the closing rationale." -- Tavix 20:37, 7 January 2025 (UTC)
- Bad idea to allow, this sort of closure is just busy work, that imposes more work on the admin that then has to review the arguments, close and then delete. Graeme Bartlett (talk) 22:05, 7 January 2025 (UTC)
- Is this the same as #Non-Admin XFD Close as Delete above? Anomie⚔ 23:04, 7 January 2025 (UTC)
- Yes, User:Anomie. Same issue coming from the same DRV. Robert McClenon (talk) 03:52, 8 January 2025 (UTC)
- (1) As I've also noted in the other discussion, the deletion process guidelines at WP:NACD do say non-admins shouldn't do "delete" closures and do recognize exceptions for CfD and TfD. There isn't a current inconsistency there between guidelines and practice.
(2) In circumstances where we do allow for non-admin "delete" closures, I would hope that the implementing admin isn't fully reviewing the discussion de novo before implementing, but rather giving deference to any reasonable closure. That's how it goes with requested move closers asking for technical help implementing a "moved" closure at WP:RM/TR (as noted at WP:RMNAC, the closure will "generally be respected by the administrator (or page mover)" but can be reverted by an admin if "clearly improper"). SilverLocust 💬 08:41, 9 January 2025 (UTC)
- Comment - A couple things to note about the CFD process: It very much requires work by admins. The non-admin notes info about the close at WT:CFD/Working, and then an admin enters the info on the CFD/Working page (which is protected) so that the bot can perform the various actions. Remember that altering a category is potentially more labour intensive than merely editing or deleting a single page - every page in that category must be edited, and then the category deleted. (There are other technical things involved, like the mess that template transclusion can cause, but let's keep it simple.) So I wouldn't suggest that that process is very useful as a precedent for anything here. It was done at a time when there was a bit of a backlog at CfD, and this was a solution some found to address that. Also - since then, I think at least one of the regular non-admin closers there is now an admin. So there is that as well. - jc37 09:14, 9 January 2025 (UTC)
- If the expectation is that an admin needs to review the deletion discussion to ensure they agree with that outcome before deleting via G6, as multiple people here are suggesting, then I'm not sure this is worthwhile. However, I have had many admins delete pages I've tagged with G6, and I have been assuming that they only check that the discussion was indeed closed as delete, and trust the closer to be responsible for the correctness of it. This approach makes sense to me, because if a non-admin is competent to close and be responsible for any other outcome of a discussion, I don't see any compelling reason they can't be responsible for a delete outcome and close accordingly. —Compassionate727 19:51, 9 January 2025 (UTC)
- Some closers, and you're among them, have closing accuracy similar to many sysops. But the sysop can't/shouldn't "trust" that your close is accurate. Trustworthy though you are, the sysop must, at very minimum, check firstly that the close with your signature on it was actually made by you (signatures are easily copied), secondly that the close wasn't manifestly unreasonable, and thirdly that the CSD is correct. WP:DRV holds the deleting sysop responsible for checking that the CSD were correctly applied. G6 is for uncontroversial deletions, and if there's been an XFD, then it's only "uncontroversial" if the XFD was unanimous or nearly so. We do have sysops who'll G6 without checking carefully, but they shouldn't. Basically, non-admin closing XFDs doesn't save very much sysop time. I think that if your motive as a non-admin is to relieve sysops of labour, the place you're of most use is at RfC.—S Marshall T/C 11:28, 12 January 2025 (UTC)
- "if your motive as a non-admin is to relieve sysops of labour, the place you're of most use is at RfC", alternatively you should consider becoming an administrator yourself. Thryduulf (talk) 13:20, 12 January 2025 (UTC)
  - If you're willing to tolerate the RFA process.—S Marshall T/C 15:24, 12 January 2025 (UTC)
- In all the cases I have dealt with, the admin's reason for deletion (usually copyvio) was completely different to the issues being debated in the AfD (usually notability). The closing statement was therefore something like "Discussion is now moot due to article being deleted for <reason> by <admin>". Hawkeye7 (discuss) 20:10, 14 January 2025 (UTC)
- I think most all the time, experienced closers will do a great job and that will save admin time because they will not have to construct and explain the close from scratch, but there will be some that are bad and that will be costly in time not just for the admin but for the project's goal of completing these issues and avoiding disruption. I think that lost time is still too costly, so I would oppose non-admin delete closes. (Now if there were a proposal for a process to make a "delete-only admin permission" that would be good -- such motivated specialists would likely be more efficient.) Alanscottwalker (talk) 16:44, 12 January 2025 (UTC)
- As I said at the "Non-Admin XFD Close as Delete" section, I support non-admins closing RfDs as Delete. If TfDs have been made an exception, RfDs can be too, especially considering RfD backlogs. Closing a heavily discussed nomination at RfD is more about the reading, analysis and thought process at arriving at the outcome, and less about the technicality of the subsequent page actions. I don't see a significant difference between non-admins closing discussions as Delete vs non-Delete. It will help making non-admins mentally prepared to advance to admin roles. Jay 💬 14:53, 14 January 2025 (UTC)
- The backlog at RFD is mostly lack of participation, not lack of admins not making closures. This would only be exacerbated if non-admins are given a reason not to !vote on discussions trending toward deletion so they can get the opportunity to close. RFD isn't as technical as CFD and TFD. In any case, any admin doing the deletion would still have to review the RFD. Except in the most obviously trivial cases, this will lead to duplicate work, and even where it doesn't (e.g. multiple !votes all in one direction), the value-add is minimal.
Modifying the first sentence of BLPSPS
FYI: A discussion has been started at WT:BLP re: modifying the text of BLPSPS. FactOrOpinion (talk) 14:23, 13 January 2025 (UTC)
Upgrade MOS:ALBUM to an official guideline
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
Wikipedia:WikiProject_Albums/Album_article_style_advice is an essay. I've been editing since 2010, and for the entire duration of that, this essay has been referred to and used extensively, and has even guided discussions regarding ascertaining if sources are reliable. I propose that it be formally upgraded to a status as an MOS guideline parallel to MOS:MUSIC.--3family6 (Talk to me | See what I have done) 14:28, 13 January 2025 (UTC)
- I'm broadly in favor of this proposal—I looked over the essay and most of it is aligned with what seems standard in album articles—but there are a few aspects that feel less aligned with current practice, which I'd want to reexamine before we move forward with promoting this:
- The section Recording, production suggests "What other works of art is this producer known for?" as one of the categories of information to include in a recording/production section. This can be appropriate in some cases (e.g., the Nevermind article discusses how Butch Vig's work with Killdozer inspired Nirvana to try and work with him), but recommending it outright seems like it'd risk encouraging people to WP:COATRACK. My preference would be to cut the sentence I quoted and the one immediately following it.
- The section Track listing suggests that the numbered-list be the preferred format for track listings, with other formats like {{Track listing}} being alternative choices for "more complicated" cases. However, in my experience, using {{Track listing}} rather than a numbered list tends to be the standard. All of the formatting options currently listed in the essay should continue to be mentioned, but I think portraying {{Track listing}} as the primary style would be more reflective of current practice.
- The advice in the External links section seems partially outdated. In my experience, review aggregators like Metacritic are conventionally discussed in the "Critical reception" section instead these days, and I'm uncertain to what extent we still link to databases like Discogs even in ELs.
- (As a disclaimer, my familiarity with album articles comes mostly from popular-music genres, rock and hip-hop in particular. I don't know if typical practice is different in areas like classical or jazz.) Overall, while I dedicated most of my comment volume to critiques, these are a fairly minor set of issues in what seems like otherwise quite sound guidance. If they're addressed, it's my opinion that this essay would be ready for prime time. ModernDayTrilobite (talk • contribs) 15:19, 13 January 2025 (UTC)
- I'd agree with all of this, given my experience. The jazz and classical that I've seen is mostly the same.--3family6 (Talk to me | See what I have done) 16:57, 13 January 2025 (UTC)
- Me too, though sometime last year, I unexpectedly had some (inexplicably strong) pushback on the tracklist part with an editor or two. In my experience, using the track list template is the standard, and I can't recall anyone giving me any pushback for it, but some editors apparently prefer just using numbers. I guess we can wait and see if there's any current pushback on it. Sergecross73 msg me 17:01, 13 January 2025 (UTC)
- Was it pushback for how you had rendered the tracklist, or an existing tracklist being re-formatted by you or them?--3family6 (Talk to me | See what I have done) 18:13, 13 January 2025 (UTC)
- They came to WT:ALBUMS upset that another editor was changing track lists from "numbered" to "template" formats. My main reaction was surprise, because in my 15+ years of article creations and rewrites, I almost exclusively used the tracklist template, and had never once received any pushback.
- So basically, I personally agree with you and MDT above, I'm merely saying I've heard someone disagree. I'll try to dig up the discussion. Sergecross73 msg me 17:50, 14 January 2025 (UTC)
- I found this one from about a year ago, though this was more about sticking to the current wording as is than about opposition to changing it. Not sure if there was another one or not. Sergecross73 msg me 18:14, 14 January 2025 (UTC)
- I remember one editor being strongly against the template, but they are now community banned. Everyone else I've seen so far uses the template. AstonishingTunesAdmirer 連絡 22:25, 13 January 2025 (UTC)
- I can see the numbered-list format being used for very special cases like Guitar Songs, which was released with only two songs, and had the same co-writers and producer. But I imagine we have extremely few articles that are like that, so I believe the template should be the standard. Elias 🦗🐜 12:23, 14 January 2025 (UTC)
- ModernDayTrilobite, regarding linking to Discogs, some recent discussions I was in at the end of last year indicate that it is common to still link to Discogs as an EL, because it gives more exhaustive track, release history, and personnel listings than Misplaced Pages generally should include.--3family6 (Talk to me | See what I have done) 14:14, 15 January 2025 (UTC)
- Thank you for the clarification! In that case, I've got no objection to continuing to recommend it. ModernDayTrilobite (talk • contribs) 14:37, 15 January 2025 (UTC)
- There were several discussions about Discogs and an RfC here. As a user of {{Discogs master}}, I agree with what other editors said there. We can't mention every version of an album in an article, so an external link to Discogs is invaluable IMO. AstonishingTunesAdmirer 連絡 22:34, 13 January 2025 (UTC)
- We badly need this to become part of the MOS. As it stands, some editors have rejected the guidelines as they're just guidelines, not policies, which defeats the object of having them in the first place. Popcornfud (talk) 16:59, 13 January 2025 (UTC)
- I mean, they are guidelines, but deviation per WP:IAR should be for a good reason, not just because someone feels like it.--3family6 (Talk to me | See what I have done) 18:14, 13 January 2025 (UTC)
- I am very much in favor of this becoming an official MOS guideline per User:Popcornfud above. Very useful as a template for album articles. JeffSpaceman (talk) 21:03, 13 January 2025 (UTC)
- I recently wrote my first album article and this essay was crucial during the process, to the extent that me seeing this post is like someone saying "I thought you were already an admin" in RFA; I figured this was already a guideline. I would support it becoming one. DrOrinScrivello (talk) 02:00, 14 January 2025 (UTC)
- I have always wondered why all this time these pointers were categorized as an essay. It's about time we formalize them; as said earlier, there are some outdated things that need to be discussed (like in WP:PERSONNEL which advises not to use stores for credits, even though in the streaming era we have more and more albums/EPs that never get physical releases). Also, song articles should also have their own guidelines, IMV. Elias 🦗🐜 12:19, 14 January 2025 (UTC)
- I'd be in favor of discussing turning the outline at the main page for WP:WikiProject Songs into a guideline.--3family6 (Talk to me | See what I have done) 12:53, 14 January 2025 (UTC)
- I get the sense it'd have to be a separate section from this one, given the inherent complexity of album articles as opposed to that of songs. Elias 🦗🐜 14:56, 14 January 2025 (UTC)
- Yes, I think it should be a separate, parallel guideline.--3family6 (Talk to me | See what I have done) 16:53, 14 January 2025 (UTC)
- I think it needs work--I recall that a former longtime album editor, Richard3120 (not pinging them, as I think they are on another break to deal with personal matters), floated a rewrite a couple of years ago. Just briefly: genres are a perennial problem, editors love unsourced exact release dates and chronology built on OR (many discography pages are sourced only to random Billboard, AllMusic, and Discogs links, rather than sources that provide a comprehensive discography), and, like others, I think all the permutations of reissue and special edition track listings have gotten out of control, as well as these long lists of non-notable personnel credits (eight second engineers, 30 backing vocalists, etc.). Also agree that the track listing template issue needs consensus; if three are acceptable, then three are acceptable--again, why change it to accommodate the names of six non-notable songwriters? There's still a divide on the issue of commercial links in the body of the article--I have yet to see a compelling reason for their inclusion (WP is, uh, not for sale, remember?), when a better source can always be found (and editors have noted, not that I've made a study of it, that iTunes often uses incorrect release dates for older albums). But I also acknowledge that since this "floated" rewrite never happened, the community at large may be satisfied with the guidelines. Caro7200 (talk) 13:45, 14 January 2025 (UTC)
- Regarding the personnel and reissue/special edition track listing, I don't know if I can dig up the discussions, but there seems to be a consensus against being exhaustive and instead to put an external link to Discogs. I fail to see how citing Billboard or AllMusic for a release date on discographies is OR, unless you're talking about in the lead. At least in the case of Billboard, that's an established RS (AllMusic isn't the most accurate with dates).--3family6 (Talk to me | See what I have done) 13:53, 14 January 2025 (UTC)
- I meant that editors often use discography pages to justify chronology, even though Billboard citations are simply supporting chart positions, Discogs only states that an album exists, and AllMusic entries most often do not give a sequential number in their reviews, etc. There is often not a source (or sources) that states that the discography is complete, categorized properly, and in order. Caro7200 (talk) 14:05, 14 January 2025 (UTC)
- Ah, okay, I understand now.--3family6 (Talk to me | See what I have done) 16:54, 14 January 2025 (UTC)
Myself, I've noticed that some of the sourcing recommendations are contrary to WP:RS guidance (more strict, actually!) or otherwise outside consensus. For instance, MOS:ALBUMS currently says to not use vendors for track list or personnel credits, linking to WP:AFFILIATE in WP:RS, but AFFILIATE actually says that such use is acceptable but not preferred. Likewise, MOS:ALBUMS says not to use scans of liner notes, which is 1. absurd, and 2. not actual consensus, which in the discussions I've had is that actual scans are fine (which makes sense as it's a digital archived copy of the source).--3family6 (Talk to me | See what I have done) 14:05, 14 January 2025 (UTC)
- The tendency to be overreliant on liner notes is also a detriment. I've encountered some liner notes on physical releases that have missing credits (e.g. only the producers are credited and not the writers), or there are outright no notes at all. Tangentially, some physical releases of albums like Still Over It and Pink Friday 2 actually direct consumers to official websites to see the credits, which has the added problem of link rot (the credits website for Still Over It no longer works and is a permanent dead link). Elias 🦗🐜 15:04, 14 January 2025 (UTC)
- That turns editors to using stores like Spotify or Apple Music as the next-best choice, but a new problem arises -- the credits for a specific song can vary depending on the site you use. One important thing we should likely discuss is what sources should take priority wrt credits. For an example of what I mean, take "No Love". Go to Spotify to check its credits and you'd find the name Sean Garrett -- head to Apple Music, however, and that name is missing. I assume these digital credits have a chance to deviate from the albums' physical liner notes as well, if there is one available. Elias 🦗🐜 15:11, 14 January 2025 (UTC)
- Moreover, the credits in stores are not necessarily correct either. An example I encountered was on Tidal, an amazing service and the only place where I could find detailed credits for one album (not even liner notes had them, since back then artists tried to avoid sample clearance). However, as I was double checking everything, one song made no sense: in its writing credits I found "Curtis Jackson", with a link to 50 Cent's artist page. It seemed extremely unlikely that they would collaborate, nor any of his work was sampled here. Well, it turns out this song sampled a song written by Charles Jackson of The Independents. AstonishingTunesAdmirer 連絡 16:39, 14 January 2025 (UTC)
- PSA and AstonishingTunesAdmirer, I agree that it's difficult. I usually use both the physical liner notes and online streaming and retail sources to check for completeness and errors. I've also had the experience of Tidal being a great resource, and, luckily, so far I've yet to encounter an error. Perhaps advice for how to check multiple primary sources here for errors should be added to the proposed guideline.--3family6 (Talk to me | See what I have done) 17:00, 14 January 2025 (UTC)
- At this point, I am convinced as well that finding the right sources for credits should be on a case-by-case basis, with the right amount of discretion from the editor. While I was creating List of songs recorded by SZA, which included several SoundCloud songs where it was extremely hard to find songwriting credits, I found the Songview database useful for filling those missing gaps. More or less the credits there align with what's on the liner notes/digital credits. However, four issues, most of which you can see by looking at the list I started: 1) they don't necessarily align with physical liner notes either, 2) sometimes names are written differently depending on the entry, 3) there are entries where a writer (or co-writer) is unknown, and 4) some of the entries here were never officially released and confirmed as outtakes/leaks (why is "BET Awards 19 Nomination Special" here, whatever that means?). Elias 🦗🐜 22:59, 14 January 2025 (UTC)
- Yeah, I've found it particularly tricky when working on technical personnel (production, engineering, mixing, etc.) and songwriting credits for individuals. I usually use the liner notes (if there are any), check AllMusic and Bandcamp, and also check Tidal if necessary. But I'll also look at Spotify, too. I know they're user-generated, so I don't cite them, but I usually look at Discogs and Genius to get an idea if I'm missing something. Thank you for pointing me to Songview, that will probably also be really helpful. 3family6 (Talk to me | See what I have done) 12:50, 15 January 2025 (UTC)
- (@3family6, please see WP:PROPOSAL for advice on advertising discussions about promoting pages to a guideline. No, you don't have to start over. But maybe add an RFC tag or otherwise make sure that it is very widely publicized.) WhatamIdoing (talk) 23:37, 14 January 2025 (UTC)
- Thank you. I'll notify the Manual of Style people. I did already post a notice at WP:ALBUMS. I'll inform other relevant WikiProjects as well.--3family6 (Talk to me | See what I have done) 12:46, 15 January 2025 (UTC)
Before posting the RfC as suggested by WhatamIdoing, I'm proposing the following changes to the text of MOS:ALBUM as discussed above:
- Eliminate What other works of art is this producer known for? Keep the list of other works short, as the producer will likely have their own article with a more complete list. from the "Recording, production" sub-section.
- Rework the text of the "Style and form" for tracklistings to:
- The track listing should be under a primary heading named "Track listing".
- A track listing should generally be formatted with the {{Track listing}} template. Note, however, that the track listing template forces a numbering system, so tracks originally listed as "A", "B", etc., or with other or no designations, will not appear as such when using the template. Additionally, in the case of multi-disc/multi-sided releases, a new template may be used for each individual disc or side, if applicable.
- Alternate forms, such as a table or a numbered list, are acceptable but usually not preferred. If a table is used, it should be formatted using class="wikitable", with column headings "No.", "Title" and "Length" for the track number, the track title and the track length, respectively (see Help:Table). In special cases, such as Guitar Songs, a numbered list may be the most appropriate format.
- Move Critical reception overviews like AcclaimedMusic (using {{Acclaimed Music}}), AnyDecentMusic?, or Metacritic may be appropriate as well. from "External links" to "Album ratings templates" of "Critical reception", right before the sentence about using {{Metacritic album prose}}.
- Re-write this text from "Sourcing" under "Track listing" from However, if there is disagreement, there are other viable sources. Only provide a source for a track listing if there are exceptional circumstances, such as a dispute about the writers of a certain track. Per WP:AFFILIATE, avoid commercial sources such as online stores and streaming platforms. In the rare instances where outside citations are required, explanatory text is useful to help other editors know why the album's liner notes are insufficient. to Per WP:AFFILIATE, commercial sources such as online stores and streaming platforms are acceptable to cite for track list information, but secondary coverage in independent reliable sources is preferred if available. Similarly, in the "Personnel" section, re-write Similar to the track listing requirements, it is generally assumed that a personnel section is sourced from the liner notes. In some cases, it will be necessary to use third-party sources to include performers who are not credited in the liner notes. If you need to cite these, use {{Cite AV media}} for the liner notes and do not use third party sources such as stores (per WP:AFFILIATE) or scans uploaded to image hosting sites or Discogs.com (per WP:RS). to Similar to the track listing requirements, it is generally assumed that a personnel section is sourced from the liner notes. If you need to cite the liner notes, use {{Cite AV media}}. Scans of the physical media that have been uploaded in digital form to repositories or sites such as Discogs are acceptable for verification, but cite the physical notes themselves, not the user-generated transcriptions. Frequently, it will be necessary to use third-party sources to include performers who are not credited in the liner notes. Per WP:AFFILIATE, inline citations to e-commerce or streaming platforms to verify personnel credits are allowed. However, reliable secondary sources are preferred, if available.
- Additional guidance has been suggested for researching and verifying personnel and songwriting credits. I suggest adding It is recommended to utilize a combination of the physical liner notes (if they exist) with e-commerce sites such as Apple Music and Amazon, streaming platforms such as Spotify and Tidal, and databases such as AllMusic credits listings and Songview. Finding the correct credits requires careful, case-by-case consideration and editor discretion. If you would like assistance, you can reach out to the albums or discographies WikiProjects. The best section for this is probably in "Personnel", in the paragraph discussing that liner notes can be inaccurate.
- The excessive listing of personnel has been mentioned. I suggest adding the following to the paragraph in the "Personnel" section beginning with "The credits to an album can be extensive or sparse.": If the listing of personnel is extensive, avoid excessive, exhaustive lists, in the spirit of WP:INDISCRIMINATE. In such cases, provide an external link to Discogs and list only the major personnel.
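As a concrete sketch of the track-listing formats and the liner-notes citation discussed in the proposals above: this is illustrative only. The artist, album, track, and label names are hypothetical, and the parameter names are drawn from the {{Track listing}} and {{Cite AV media}} template documentation, so they should be double-checked against the live templates before any wording is promoted.

```wikitext
== Track listing ==
<!-- Preferred form: the {{Track listing}} template -->
{{Track listing
| headline        = Side one
| writing_credits = yes
| title1          = Example Opener
| writer1         = A. Songwriter
| length1         = 3:45
| title2          = Example Ballad
| writer2         = A. Songwriter, B. Collaborator
| length2         = 4:12
| total_length    = 7:57
}}

<!-- Alternate table form, acceptable but usually not preferred -->
{| class="wikitable"
! No. !! Title !! Length
|-
| 1 || "Example Opener" || 3:45
|-
| 2 || "Example Ballad" || 4:12
|}

<!-- Citing the physical liner notes with {{Cite AV media}} -->
<ref>{{Cite AV media |people=Example Artist |title=Example Album |type=Liner notes |publisher=Example Records |year=2024}}</ref>
```

Note that, as the proposed wording says, the template form forces numeric track numbering, so releases with lettered or unnumbered tracks would still need one of the alternate forms.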
If you have any additional suggestions, or suggestions regarding the wording of any of the above (I personally think that four needs to be tightened up or expressed better), please give them. I'm pinging the editors who raised issues with the essay as currently written, or were involved in discussing those issues, for their input regarding the above proposed changes. ModernDayTrilobite, PSA, Sergecross73, AstonishingTunesAdmirer, Caro7200, what do you think? Also, I realize that I never pinged Fezmar9, the author of the essay, for their thoughts on upgrading this essay to a guideline.--3family6 (Talk to me | See what I have done) 17:21, 15 January 2025 (UTC)
- The proposed edits all look good to me. I agree there's probably some room for improvement in the phrasing of #4, but in my opinion it's still clear enough as to be workable, and I haven't managed to strike upon any other phrasings I liked better for expressing its idea. If nobody else has suggestions, I'd be content to move forward with the language as currently proposed. ModernDayTrilobite (talk • contribs) 17:37, 15 January 2025 (UTC)
- It might be better to have this discussion on its talk page. That's where we usually talk about changes to a page. WhatamIdoing (talk) 17:38, 15 January 2025 (UTC)
- WhatamIdoing - just the proposed changes, or the entire discussion about elevating this essay to a guideline?--3family6 (Talk to me | See what I have done) 18:21, 15 January 2025 (UTC)
- It would be normal to have both discussions (separately) on that talk page. WhatamIdoing (talk) 18:53, 15 January 2025 (UTC)
- Okay, thank you. I started the proposal to upgrade the essay here, as it would be far more noticed by the community, but I'm happy for everything to get moved there.-- 3family6 (Talk to me | See what I have done) 19:00, 15 January 2025 (UTC)
- These changes look good to me. Although, since we got rid of Acclaimed Music in the articles, we should probably remove it here too. AstonishingTunesAdmirer 連絡 19:36, 15 January 2025 (UTC)
- Sure thing.--3family6 (Talk to me | See what I have done) 20:56, 15 January 2025 (UTC)
reverts all edits
Hello everyone. I have an idea for the Misplaced Pages coders. Would it be possible for you to design an option that, with the click of a button, automatically reverts all edits of a disruptive user? This idea came to my mind because some people create disposable accounts to cause disruption in all their edits... In this case, a lot of time and energy is consumed by administrators and reverting users to undo all the vandalism. If there were a template that could revert all the edits of a disruptive user with one click, it would be very helpful. If you think regular users might misuse this option, you could limit it to Misplaced Pages administrators only so they can quickly and easily undo the disruption. Hulu2024 (talk) 17:31, 13 January 2025 (UTC)
- Hi @Hulu2024, there's a script that does that: User:Writ Keeper/Scripts/massRollback. Also, editors who use Twinkle can single-click revert all consecutive edits of an editor. Schazjmd (talk) 17:44, 13 January 2025 (UTC)
- Is this tool active in all the different languages of Misplaced Pages? I couldn't perform such an action with the tool you mentioned. Hulu2024 (talk) 17:51, 13 January 2025 (UTC)
- That script requires the Misplaced Pages:Rollback permission, which is available only for admins and other trusted users. Admins and other users with the tool have gotten in trouble for using it inappropriately. I never use it myself, as I find the rollback in Twinkle quite sufficient for my needs. Donald Albury 17:54, 13 January 2025 (UTC)
- (ec) I don't know about other languages. If you check the page I linked, you'll see that the script requires rollback rights. Schazjmd (talk) 17:55, 13 January 2025 (UTC)
- @Schazjmd Sorry, can your option reverse all edits of a user across different pages with one click? I think massRollback can reverse all edits on a single wiki page, not all edits of a disruptive user across multiple pages. Or am I wrong? Hulu2024 (talk) 04:23, 14 January 2025 (UTC)
- If you want this for the Persian Misplaced Pages, you should probably talk to Ladsgroup. WhatamIdoing (talk) 23:41, 14 January 2025 (UTC)
- @WhatamIdoing Thank you. Hulu2024 (talk) 07:11, 15 January 2025 (UTC)
Problem For Translate page
Hello everyone. I don't know who is in charge of coding the Translate page on Misplaced Pages, but I wanted to send my message to the Misplaced Pages coders: in the Misplaced Pages translation system, the information boxes for individual persons (i.e. the personal biography box; see Template:Infobox person) are not automatically translated, and it is time-consuming for Misplaced Pages users to manually translate and change the links one by one from English to another language. Please, could the coders come up with a solution for translating the infobox templates? Thank you. Hulu2024 (talk) 17:32, 13 January 2025 (UTC)
- Hi Hulu2024, this also applies to the section above. If your proposal only applies to the English Misplaced Pages then it is probably best to post it at WP:VPT in the first instance. If it is only about the Persian Misplaced Pages then you may wish to try there. If it is more general then you could try Meta:, or, for more formal proposals, phabricator. Phil Bridger (talk) 18:51, 13 January 2025 (UTC)
- @Phil Bridger Thank you. Hulu2024 (talk) 19:21, 13 January 2025 (UTC)
A discrimination policy
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- i quit this will go no where im extremely embarassed and feel horrible i dont think ill try again
Ani cases:
I would like to start this proposal by saying that this concept was proposed in 2009 and failed for obvious reasons. But now, in 2025, we need it, as it's happened a bunch. It's already covered under personal attacks, but I and a couple of other Wikipedians feel it should be codified, as there is precedent for blocking users who discriminate. Here's a list of the things I want to include in this policy. edit: This policy is intended to target blatant and admitted instances of discrimination. If the intent behind an action is ambiguous, users should continue to assume good faith until the intent is clear.
Just as being a member of a group does not give one special requirements to edit, it also does not endow any special privileges. One is not absolved of discrimination against a group just because one claims to be a member of that group.
What counts as discrimination
- Race
- Disability (will define this further below)
- Disease
- Gender (different from sex; neurological)
- Sex (different from gender; biological)
- Sexuality
- Religion
- Hobbies (e.g. furry, one of the most-harassed hobbies)
- Relationship status
- Marital status
- (Idk how to word this but) lack of parental presence
- Political position (will be a hot topic)
- Discrimination on anything I missed would be covered as well
Disability is an umbrella term in my view: you have mental and physical.
examples for mental would be:
- schizophrenia
- autism
- ADHD
- PTSD
- mood disorders (depression, borderline personality disorder)
- dyslexia (or any learning disability)
examples of physical:
- paralysis
- Pretty much any physical injury
- I'm aware that this never really happens, but it's good to go over
A user may not claim without evidence that another user is affected by, or belongs to, any of the above.
A user may not claim that users with these disabilities/beliefs/races/genders shouldn’t edit Misplaced Pages.
A user may not imply another user is beneath them based on any of the above.
Calling people "woke" simply because they are queer is discrimination.
Also I would like to propose a condition.
Overreacting to what you think is discrimination (accidental misgendering or wrong pronouns) when the user apologizes for it is not grounds for an entry at ANI.
This should be used as a guideline.
From the Misplaced Pages article on discrimination: "discrimination is defined as acts, practices, or policies that wrongfully impose a relative disadvantage or deprivation on persons based on their membership in a salient social group. This is a comparative definition. An individual need not be actually harmed in order to be discriminated against. He or she just needs to be treated worse than others for some arbitrary reason. If someone decides to donate to help orphan children, but decides to donate less, say, to children of a particular race out of a racist attitude, he or she will be acting in a discriminatory way even if he or she actually benefits the people he discriminates against by donating some money to them." I would also like to say this would give us negative press coverage by right-wing media and I'll receive shit. But I don't care, I can deal with it. •Cyberwolf•talk? 16:37, 16 January 2025 (UTC)
- This largely seems like behavior that already is sanctionable per WP:NPA and WP:UCOC (and the adoption of the latter drew complaints at the time that it in itself was already unnecessarily redundant with existing civility policy on en.wiki). What shortcomings do you see with those existing bodies of policy en force? signed, Rosguill 16:45, 16 January 2025 (UTC)
- The fact that punishments should be a little more severe for users who go after a whole group of editors. As it's not just an NPA, it's an attack on a group •Cyberwolf•talk? 16:57, 16 January 2025 (UTC)
- NPA violations are already routinely met with blocks and sitebans, often on sight without prior warning for the level of disparagement you're describing. Do you have any recent examples on hand of cases where the community's response was insufficiently severe? signed, Rosguill 17:07, 16 January 2025 (UTC)
- I'll grab some. My issue is admins can unblock without community input; it should be that after an admin block, they have to appeal to the community •Cyberwolf•talk? 17:10, 16 January 2025 (UTC)
- Noting that I've now taken the time to read through the three cases listed at the top--two of them ended in NOTHERE blocks pretty quickly--I could see someone taking issue with the community's handling of RowanElder and Jwa05002, although it does seem that the discussion ultimately resulted in an indef block for one and an apparently sincere apology from the other. signed, Rosguill 17:13, 16 January 2025 (UTC)
- I think the real problem is that in order to block for any reason you have to take them to a place where random editors discuss whether they are a "net positive" or "net negative" to the wiki, which in principle would be a fair way to decide, but in reality is like the work of opening an RFC just in order to get someone to stop saying random racist stuff, and it's not worth it. Besides, remember the RSP discussion where the Daily Mail couldn't be agreed to be declared unreliable on transgender topics because "being 'gender critical' is a valid opinion" according to about half the people there? I've seen comments that were blatant bigoted insults beneath a thin veneer, that people did not take to ANI because it's just not worth the huge amount of effort. There really needs to be an easy way for administrators to warn (on first violation) and then block people who harass people in discriminatory ways without a huge and exhausting-for-the-complainer "discussion" about it -- and a very clear policy that says discrimination is not OK and is always "net negative" for the encyclopedia would reduce the complexity of that discussion, and I think is an important statement to make.
- By allowing it to be exhaustively debated whether thinly-veiled homophobic insults towards gay people warrant banning, Misplaced Pages is deliberately choosing not to take a stance on the topic. A stance needs to be taken, and it needs to be clear enough to allow rapid and decisive action that makes people actually afraid to discriminate against other editors, because they know that it isn't tolerated, rather than being reasonably confident their targets won't undergo another exhausting ANI discussion. Mrfoogles (talk) 17:04, 16 January 2025 (UTC)
- Said better than I could. I agree wholeheartedly; it happens way too much •Cyberwolf•talk? 17:18, 16 January 2025 (UTC)
- I agree that a blind eye shouldn't be turned against discrimination against groups of Misplaced Pages editors in general, but I don't see why we need a list that doesn't include social class but includes hobbies. The determining factor for deciding whether something is discrimination should be how much choice the individual has in the matter, which seems, in practice, to be the way WP:NPA is used. Phil Bridger (talk) 17:02, 16 January 2025 (UTC)
- I agree hobbies doesn't need to be included. Haven't seen a lot of discrimination based on social class? I think this needs to be taken to the Idea Lab. Mrfoogles (talk) 17:06, 16 January 2025 (UTC)
- Sorry, this was just me spitballing; I personally have been harassed over my hobbies •Cyberwolf•talk? 17:07, 16 January 2025 (UTC)
- @cyberwolf Strong support in general (see above) but I strongly suggest you take this to the idea lab, because it's not written as a clear and exact proposal and it would probably benefit a lot from being developed into an RFC before taking it here. In the current format it probably can't pass because it doesn't make specific changes to policy. Mrfoogles (talk) 17:08, 16 January 2025 (UTC)
- Yeah, sorry, I'm new to this; I was told to go here to get the ball rolling •Cyberwolf•talk? 17:11, 16 January 2025 (UTC)
- Wait...does this mean I won't be able to discriminate against people whose hobby is editing Misplaced Pages? Where's the fun in that? Anonymous 17:09, 16 January 2025 (UTC)
- I guess not :3 •Cyberwolf•talk? 17:13, 16 January 2025 (UTC)
- In general, I fail to see the problem this is solving. The UCoC and other policies/guidelines/essays (such as WP:NPA, WP:FOC, and others) already prohibit discriminatory behavior. And normal conduct processes already have the ability to lay down the strictest punishment theoretically possible - an indefinite ban - for anyone who engages in such behavior.
- I do not like the idea of what amounts to bureaucracy for bureaucracy’s sake. That is the best way I can put it. At worst, this is virtue signaling - it’s waving a flag saying “hey, public and editors, Misplaced Pages cares about discrimination so much we made a specific policy about it” - without even saying the next part “but our existing policies already get people who discriminate against other editors banned, so this was not necessary and a waste of time”. I’ll happily admit I’m proven wrong if someone can show evidence of a case where actual discrimination was not acted upon because people were “concerned” it wasn’t violating one of those other policies. -bɜ:ʳkənhɪmez | me | talk to me! 20:56, 16 January 2025 (UTC)
- To clarify, all the comments about "why is this included" or "why is this not included" are part of the reason I'm against a specific policy like this. Any disruption can be handled by normal processes, and a specific policy will lead to wikilawyering over what is or is not discrimination. There is no need to try to define/specifically treat discrimination when all discriminatory behaviors are adequately covered by other policies already. -bɜ:ʳkənhɪmez | me | talk to me! 22:27, 16 January 2025 (UTC)
- We should be relating to other editors in a kind way. But this proposal appears to make the editing environment more hostile, with more blocking on the opinion of one person. We do discriminate against those that use Misplaced Pages for wrong purposes, such as vandalism or advertising. Pushing a particular point of view is more of a grey area. The proposal by cyberwolf is partly a point of view that many others would disagree with. So we should concentrate policies on how a user relates to other editors, rather than their motivations or opinions. Graeme Bartlett (talk) 20:50, 16 January 2025 (UTC)
- I think this is valuable by setting a redline for a certain sort of personal attack and saying, "this is a line nobody is permitted to cross while participating in this project." Simonm223 (talk) 20:57, 16 January 2025 (UTC)
- It is not possible for the content of a discussion to be "discriminatory". Discrimination is action, not speech. This proposal looks like an attempt to limit discourse to a certain point of view. That's not a good idea. --Trovatore (talk) 21:13, 16 January 2025 (UTC)
- Discrimination can very much be speech. Akechi The Agent Of Chaos (talk) 00:36, 17 January 2025 (UTC)
- Nope. --Trovatore (talk) 00:44, 17 January 2025 (UTC)
- Cambridge says that discrimination is : "treating a person or particular group of people differently, especially in a worse way from the way in which you treat other people, because of their race, gender, sexuality, etc".
- So yes, that includes speech because you can treat people differently in speech. Speech is an act. TarnishedPath 01:04, 17 January 2025 (UTC)
- OK, look, I'll concede part of the point here. Yes, if I'm a dick to (name of group) but not to (name of other group), I suppose that is discrimination, but I don't think a discrimination policy is a particularly useful tool for this, because what I should do is not be a dick to anybody.
- What I'm concerned about is that the policy would be used to assert that certain content is discriminatory. Say someone says, here's a reliable source that says biological sex is real and has important social consequences, and someone else says, you can't bring that up, it's discriminatory. Well, no, that's a category error. That sort of thing can't be discriminatory. --Trovatore (talk) 01:29, 17 January 2025 (UTC)
- just drop it •Cyberwolf•talk? 01:23, 17 January 2025 (UTC)
- I would remove anything to do with political position. Those on the far-right should be discriminated against. TarnishedPath 21:45, 16 January 2025 (UTC)
- The examples you use show that we've been dealing effectively without this additional set of guidelines; it would be more convincing that something was needed if you had examples where the lack of this policy caused bad outcomes. And I can see it being used as a hammer; while we're probably picturing "as a White man, I'm sure that I understand chemistry better than any of you lesser types" as what we're going after, I can see some folks trying to wield it against "as a Comanche raised on the Comanche nation, I think I have some insights on the Comanche language that others here are overlooking." As such, I'm cautious. -- Nat Gertler (talk) 21:49, 16 January 2025 (UTC)
- Comment. I am sorry that caste discrimination is being ignored here. Xxanthippe (talk) 21:54, 16 January 2025 (UTC).
- Not needed. Everything the proposal is talking about would constitute disruptive behavior, and we can block or ban someone for being disruptive already. No need to break disruption down into its component parts, and write rules for each. Blueboar (talk) 22:07, 16 January 2025 (UTC)
References
- Professor Dave Explains (2022-06-06). Let’s All Get Past This Confusion About Trans People. Retrieved 2025-01-15 – via YouTube.
- Altinay, Murat; Anand, Amit (2020-08-01). "Neuroimaging gender dysphoria: a novel psychobiological model". Brain Imaging and Behavior. 14 (4): 1281–1297. doi:10.1007/s11682-019-00121-8. ISSN 1931-7565.
Repeated false retirement
There is a user (who shall remain unnamed) who has "retired" twice and had the template removed from their page by other users because they were clearly still editing. They are now on their third "retirement", yet they last edited a few days ago. I don't see any policy formally prohibiting such behavior, but it seems extremely unhelpful for obvious reasons. Anonymous 17:13, 16 January 2025 (UTC)
- Unless the material is harmful to Misplaced Pages or other users, users have considerable leeway in what they may post on their user page. Personally, I always take "retirement" notices with a grain of salt. If a user wants to claim they are retired even though they are still actively editing, I don't see the harm to anything but their credibility. If I want to know if an editor is currently active, I look at their contributions, not at notices on their user or talk page. Donald Albury 22:07, 16 January 2025 (UTC)
I can't imagine that this calls for a policy. You're allowed to be annoyed if you want. No one can take that away from you. But I'm missing an explanation of why the rest of us should care. --Trovatore (talk) 22:13, 16 January 2025 (UTC)
- This seems a little prickly, my friend. Clearly, the other two users who removed older retirement notices cared. At the end of the day, it's definitely not the most major thing, but it is helpful to have a reliable and simple indication as to whether or not a user can be expected to respond to any kind of communication or feedback. I'm not going to die on this hill. Cheers. Anonymous 22:41, 16 January 2025 (UTC)
- A "retirement notice" from a Misplaced Pages editor is approximately as credible as a "retirement notice" from a famous rock and roll band. Ignore it. Cullen328 (talk) 03:01, 20 January 2025 (UTC)
- FWIW, those two other editors were in the wrong to edit another person's user page for this kind of thing. And the retired banner does indicate: don't expect a quick response, even if I made an edit a few days or even minutes ago, as I may not be around much. Valereee (talk) 12:28, 20 January 2025 (UTC)
- There's a lot of active editors on the project, with retirement templates on their user pages. GoodDay (talk) 03:11, 20 January 2025 (UTC)
- I think it's kind of rude to edit someone else's user page unless there is an extreme reason, like reversing vandalism or something. On Misplaced Pages:User pages I don't see anything about retirement templates, but I do see it say "In general, one should avoid substantially editing another's user and user talk pages, except when it is likely edits are expected and/or will be helpful. If unsure, ask." If someone wants to identify as retired but sometimes drop by and edit, that doesn't seem to hurt anything. GeogSage 03:56, 20 January 2025 (UTC)
- Misplaced Pages is WP:NOTCOMPULSORY, so even a "non-retired" editor might never edit again. And if someone is "retired" but still constructively edits, just consider that a bonus. What's more problematic is a petulant editor who "retires", but returns and edits disruptively; in such case, it's their disruptive behavior that would be the issue, not a trivial retirement notice. —Bagumba (talk) 07:42, 20 January 2025 (UTC)
- As far as Misplaced Pages is concerned it's just another userbox you can put on your userpage. We only remove userboxes and userspace material if they're claiming to have a right that they don't (ie. a user with an Administrator toolbox who isn't an admin). Retirement is not an official term defined in policy anywhere, and being retired confers no special status. Pinguinn 🐧 11:13, 20 January 2025 (UTC)
- If you see a retirement template that seems to be false you could post a message on the user talk page to ask if they are really retired. I suppose it could be just a tiny bit disruptive if we cannot believe such templates, but nowhere near enough to warrant sanctions or a change in policy. Phil Bridger (talk) 13:39, 20 January 2025 (UTC)
What is the purpose of banning?
In thinking about a recent banned user's request to be unblocked, I've been reading WP:Blocking policy and WP:Banning policy trying to better understand the differences. In particular, I'm trying to better understand what criteria should be applied when deciding whether to end a sanction.
One thing that struck me is that for blocks, we explicitly say "Blocks are used to prevent damage or disruption to Misplaced Pages, not to punish users". The implication being that a user should be unblocked if we're convinced they no longer present a threat of damage or disruption. No such statement exists for bans, which implies that bans may be a form of punishment. If that's the case, then the criteria should not just be "we think they'll behave themselves now", but "we think they've endured sufficiently onerous punishment to atone for their misbehavior", which is a fundamentally different thing.
I'm curious how other people feel about this. RoySmith (talk) 16:15, 20 January 2025 (UTC)
- My understanding (feel free to correct me if I am wrong) is that blocks are made by individual admins, and may be lifted by an admin (noting that CU blocks should only be lifted after clearance by a CU), while bans are imposed by ARBCOM or the community and require ARBCOM or community discussion to lift. Whether block or ban, a restriction on editing should only be imposed when it is the opinion of the admin, or ARBCOM, or the community, that such restriction is necessary to protect the encyclopedia from further harm or disruption. I think bans carry the implication that there is less chance that the banned editor will be able to successfully return to editing than is the case for blocked editors, but that is not a punishment, it is a determination of what is needed to protect WP in the future. Donald Albury 16:44, 20 January 2025 (UTC)
- Good question. I'm interested in what ban evasion sources think about current policies, people who have created multiple accounts, been processed at SPI multiple times, made substantial numbers of edits, the majority of which are usually preserved by the community in practice for complicated reasons (a form of reward in my view - the community sends ban evading actors very mixed messages). What's their perspective on blocks and bans and how to reduce evasion? It is not easy to get this kind of information unfortunately as people who evade bans and blocks are not very chatty it seems. But I have a little bit of data from one source for interest, Irtapil. Here are a couple of views from the other side.
- On socking - "automatic second chance after first offense with a 2 week ban / block, needs to be easier than making a third one so people don't get stuck in the loop"
- On encouraging better conduct - "they need to gently restrict people, not shun and obliterate"
- No comment on the merits of these views, or whether punishment is what is actually happening, or is required, or effective, but it seems clear that it is likely to be perceived as punishment and counterproductive (perhaps unsurprisingly) by some affected parties. Sean.hoyland (talk) 17:31, 20 January 2025 (UTC)
- Blocks are a sanction authorized by the community to be placed by administrators on their own initiative, for specific violations as described by a policy, guideline, or arbitration remedy (in which case the community authorization is via the delegated authority to the arbitration committee). Blocks can also be placed to enforce an editing restriction. A ban is an editing restriction. As described on the banning policy page, it is a
formal prohibition from editing some or all pages on the English Misplaced Pages, or a formal prohibition from making certain types of edits on Misplaced Pages pages. Bans can be imposed for a specified or an indefinite duration.
Aside from cases where the community has delegated authority to admins to enact bans on their own initiative, either through community authorization of discretionary sanctions or arbitration committee designated contentious topics, editing restrictions are authorized through community discussion. They cover cases where there isn't a single specific violation for which blocking is authorized by guidance/arbitration remedy, and so a pattern of behaviour and the specific circumstances of the situation have to be discussed and a community consensus established.
- Historically, removing blocks and bans requires a consensus from the authorizing party that removing it will be beneficial to the project. Generally, the community doesn't like to impose editing restrictions when there is promise for improved behaviour, so they're enacted for more severe cases of poor behaviour. Thus it's not unusual that the community is somewhat skeptical about lifting recently enacted restrictions (where "recent" can vary based on the degree of poor behaviour and the views of each community member). Personally I don't think this means an atonement period should be mandated. isaacl (talk) 18:33, 20 January 2025 (UTC)
- I think that a block is a preventive measure, whereas a ban is where the community's reached a consensus to uninvite a particular person from the site. Misplaced Pages is the site that anyone can edit, except for a few people we've decided we can't or won't work with. A ban is imposed by a sysop on behalf of the community whereas a block is imposed on their own authority.—S Marshall T/C 19:39, 20 January 2025 (UTC)
- A ban does not always stop you from editing Misplaced Pages. It may prohibit you from editing in a certain topic area (BLP for example or policies) but you can still edit other areas. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 00:24, 23 January 2025 (UTC)
- Seems to be addressed in WP:BMB, which explains that the criteria is not dependent upon an editor merely behaving with what appears to be "good or good-faith edits". A ban is based on a persistent or long-term pattern of editing behavior that demonstrates a significant risk of "disruption, issues, or harm" to the area from which they are banned, despite any number of positive contributions said editor has made or is willing to make moving forward. As such, it naturally requires a higher degree of review (i.e. a form of community consensus) to be imposed or removed (though many simply expire upon a pre-determined expiration date without review). While some may interpret bans as a form of punishment, they are still a preventative measure at their core. At least that's my understanding. --GoneIn60 (talk) 12:59, 21 January 2025 (UTC)
Contacting/discussing organizations that fund Misplaced Pages editing
I have seen it asserted that contacting another editor's employer is always harassment and therefore grounds for an indefinite block without warning. I absolutely get why we take it seriously and 99% of the time this norm makes sense. (I'm using the term "norm" because I haven't seen it explicitly written in policy.)
In some cases there is a conflict between this norm and the ways in which we handle disruptive editing that is funded by organizations. There are many types of organizations that fund disruptive editing - paid editing consultants, corporations promoting themselves, and state propaganda departments, to name a few. Sometimes the disruption is borderline or unintentional. There have been, for instance, WMF-affiliated outreach projects that resulted in copyright violations or other crap being added to articles.
We regularly talk on-wiki and off-wiki about organizations that fund Misplaced Pages editing. Sometimes there is consensus that the organization should either stop funding Misplaced Pages editing or should significantly change the way they're going about it. Sometimes the WMF legal team sends cease-and-desist letters.
Now here's the rub: Some of these organizations employ Misplaced Pages editors. If a view is expressed that the organizations should stop the disruptive editing, it is foreseeable that an editor will lose a source of income. Is it harassment for an editor to say "Organization X should stop/modify what it's doing to Misplaced Pages?" at AN/I? Of course not. Is it harassment for an editor to express the same view in a social media post? I doubt we would see it that way unless it names a specific editor.
Yet we've got this norm that we absolutely must not contact any organization that pays a Misplaced Pages editor, because this is a violation of the harassment policy. Where this leads is a bizarre situation in which we are allowed to discuss our beef with a particular organization on AN/I but nobody is allowed to email the organization even to say, "Hey, we're having a public discussion about you."
I propose that if an organization is reasonably suspected to be funding Misplaced Pages editing, contacting the organization should not in and of itself be considered harassment. I ask that in this discussion, we not refer to real cases of alleged harassment, both to avoid bias-inducing emotional baggage and to prevent distress to those involved. Clayoquot (talk | contribs) 03:29, 22 January 2025 (UTC)
- If it's needful to contact an organisation about one of their employees' edits, Trust and Safety should do that. Not volunteers.—S Marshall T/C 09:21, 22 January 2025 (UTC)
- Let's say Acme Corporation has been spamming Misplaced Pages. If you post on Twitter "Acme has been spamming Misplaced Pages" is that harassment? How about if you write "@Acme has been spamming Misplaced Pages?" Should only Trust and Safety be allowed to add the @ sign? Clayoquot (talk | contribs) 15:43, 22 January 2025 (UTC)
- What you post on Twitter isn't something Misplaced Pages can control. But contacting another editor's employer about that editor's edits has a dark history on Misplaced Pages.—S Marshall T/C 15:49, 22 January 2025 (UTC)
- The history is dark indeed. What I'm pointing out is that writing "@Acme has been spamming Misplaced Pages" on Twitter is contacting another editor's employer. Should you be indef blocked without warning for doing that? Clayoquot (talk | contribs) 15:56, 22 January 2025 (UTC)
- You want an "in principle" discussion without talking about specific cases, so the only way I can answer that is to say: Not always, but depending on the surrounding circumstances, possibly.—S Marshall T/C 16:11, 22 January 2025 (UTC)
- I agree. You said it better than I did. Clayoquot (talk | contribs) 18:56, 22 January 2025 (UTC)
Another issue is that sometimes doing that can place another link or two in a wp:outing chain, and IMO avoiding that is of immense importance. The way that you posed the question, with the very high bar of "always", is probably not the most useful for the discussion. Also, a case like this almost always involves a concern about a particular editor or centers around edits made by a particular editor, which I think is a non-typical omission from your hypothetical example. Sincerely, North8000 (talk) 19:41, 22 January 2025 (UTC)
- I'm not sure what you mean by placing a link in an outing chain. Can you explain this further? I used the very high bar of "always" because I have seen admins refer to it as an "always" or a "bright line" and this shuts down the conversation. Changing the norm from "is always harassment" to "is usually harassment" is exactly what I'm trying to do.
- Organizations that fund disruptive editing often hire just one person to do it but I've also seen plenty of initiatives that involve money being distributed widely, sometimes in the form of giving perks to volunteers. If the organization is represented by only one editor then there is obviously a stronger argument that contacting the organization constitutes harassment. Clayoquot (talk | contribs) 06:44, 23 January 2025 (UTC)
General reliability discussions have failed at reducing discussion, have become locus of conflict with external parties, and should be curtailed
The original WP:DAILYMAIL discussion, which set off these general reliability discussions in 2017, was supposed to reduce discussion about it, something which it obviously failed to do since we have had more than 20 different discussions about its reliability since then. Generally speaking, a review of WP:RSNP does not support the idea that general reliability discussions have reduced discussion about the reliability of sources either. Instead, we see that we have repeated discussions about the reliability of sources, even where their reliability was never seriously questioned. We have had a grand total of 22 separate discussions about the reliability of the BBC, for example, 10 of which have been held since 2018. We have repeated discussions about sources that are cited in relatively few articles (e.g., Jacobin).
Moreover these discussions spark unnecessary conflict with parties off wiki that harm the reputation of the project. Most recently we have had an unnecessary conflict with the Anti-Defamation League sparked by a general reliability discussion with them, but the original Daily Mail discussion did this also. In neither case was usage of the source a problem generally on Misplaced Pages in any way that has been lessened by their deprecation - they were neither widely-used, nor permitted to be used in a way that was problematic by existing policy on using reliable sources.
There is also some evidence, particularly from WP:PIA5, that some editors have sought to "claim scalps" by getting sources they are opposed to on ideological grounds 'banned' from Misplaced Pages. Comments in such discussions are often heavily influenced by people's impression of the bias of the source.
I think at the very least we need a WP:BEFORE-like requirement for these discussions, where the editors bringing the discussion have to show that the source is one whose reliability has serious consequences for content on Misplaced Pages, and that they have tried to resolve the matter in other ways. The recent discussion about Jacobin, triggered simply by a comment by a Jacobin writer on Reddit, would be an example of a discussion that would be stopped by such a requirement. FOARP (talk) 15:54, 22 January 2025 (UTC)
- The purpose of this proposal is to reduce discussion of sources. I feel that evaluating the reliability of sources is the single most important thing that we as a community can do, and I don't want to reduce the amount of discussion about sources. So I would object to this.—S Marshall T/C 16:36, 22 January 2025 (UTC)
- I don't think it's meant to reduce discussion, but instead to start discussions at a more appropriate level than at VPP or RSP. Starting the discussion at the VPP/RSP level means you are trying to get all editors involved, which for most cases isn't really appropriate (e.g. one editor has a beef about a source and brings it to wide discussion before getting other input first). FOARP is right that opening these discussions at VPP or RSP without prior attempts to resolve elsewhere is a wear on the process. — Masem (t) 16:55, 22 January 2025 (UTC)
- Oh, well that makes more sense. We could expand WP:RFCBEFORE to cover WP:RSP?—S Marshall T/C 17:06, 22 January 2025 (UTC)
- Basically this. I favour something for RSP along the lines of WP:BEFORE/WP:RFCBEFORE, an WP:RSPBEFORE if you will. FOARP (talk) 21:50, 22 January 2025 (UTC)
- Yeah I would support anything to reduce the constant attempts to kill sources at RSN. It has become one of the busiest pages on all of Misplaced Pages, maybe even surpassing ANI. -- GreenC 19:36, 22 January 2025 (UTC)
- Oddly enough, I am wondering why this discussion is here, and not at Misplaced Pages talk:Reliable sources/Noticeboard, as it now seems to be a process discussion (more BEFORE) for RSN? Alanscottwalker (talk) 22:41, 22 January 2025 (UTC)
- Some confusion about pages here, with some mentions of RSP actually referring to RSN. RSN is a type of "before" for RSP, and RSP is intended as a summary of repeated RSN discussions. One purpose of RSP is to put a lid on discussion of sources that have appeared at RSN too many times. This isn't always successful, but I don't see a proposal here to alleviate that. Few discussions are started at RSP; they are started at RSN and may or may not result in a listing or a change at RSP. Also, many of the sources listed at RSP got there due to a formal RfC at RSN, so they were already subject to RFCBEFORE (not always obeyed). I'm wondering how many listings at RSN are created due to an unresolved discussion on an article talk page—I predict it is quite a lot. Zero 04:40, 23 January 2025 (UTC)
- “Not always obeyed” is putting it mildly. FOARP (talk) 06:47, 23 January 2025 (UTC)
Primary sources vs Secondary sources
Main page: Misplaced Pages talk:Manual of Style/Television § Episode Counts
The discussion above has spiralled out of control, and needs clarification. The discussion revolves around how to count episodes for TV series when a traditionally shorter episode (e.g., 30 minutes) is broadcast as a longer special (e.g., 60 minutes). The main point of contention is whether such episodes should count as one episode (since they aired as a single entity) or two episodes (reflecting production codes and industry norms).
The simple question is: when primary sources and secondary sources conflict, which do we use on Misplaced Pages?
- The contentious article behind this discussion is at List of Good Luck Charlie episodes, in which Deadline, TVLine and The Futon Critic all state that the series has 100 episodes; this article from TFC, which is a direct copy of the press release from Disney Channel, also states that the series has "100 half-hour episodes".
- The article has 97 episodes listed; the discrepancy comes from three particular episodes that are each an hour long (in a traditionally half-hour slot). These episodes receive two production codes, indicating two episodes, but each aired as one singular, continuous release. An editor argues that the definition of an episode means that these count as a single episode, and stands by the episodes themselves being the important primary sources.
- The discussion above discusses what an episode is. Should these be considered one episode (per the primary source of the episode), or two episodes (per the secondary sources provided)? This is where the primary conflict is.
- Multiple editors have stated that the secondary sources refer to the production of the episodes, despite the secondary sources not using this word in any format, and that the primary sources therefore override the "incorrect" information of the secondary sources. Some editors have argued that there are 97 episodes, because that's what's listed in the article.
- WP:CALC has been cited:
Routine calculations do not count as original research, provided there is consensus among editors that the results of the calculations are correct, and a meaningful reflection of the sources
An editor argues that there is not the required consensus. WP:VPT was also cited.
Another example was provided at Abbott Elementary season 3#ep36.
- The same editor arguing for the importance of the primary source stated that he would have listed this as one episode, despite a reliable source stating that there are 14 episodes in the season.
- WP:PSTS has been quoted multiple times:
Misplaced Pages articles usually rely on material from reliable secondary sources. Articles may make an analytic, evaluative, interpretive, or synthetic claim only if it has been published by a reliable secondary source.
While a primary source is generally the best source for its own contents, even over a summary of the primary source elsewhere, do not put undue weight on its contents.
Do not analyze, evaluate, interpret, or synthesize material found in a primary source yourself; instead, refer to reliable secondary sources that do so.
- Other quotes from the editors arguing for the importance of primary over secondary includes:
When a secondary source conflicts with a primary source we have an issue to be explained but when the primary source is something like the episodes themselves and what is in them and there is a conflict, we should go with the primary source.
We shouldn't be doing "is considered to be"s, we should be documenting what actually happened as shown by sources, the primary authoritative sources overriding conflicting secondary sources.
Yep, secondary sources are not perfect and when they conflict with authoritative primary sources such as released films and TV episodes we should go with what is in that primary source.
Having summarized this discussion, the question remains: when primary sources and secondary sources conflict, which do we use on Misplaced Pages?
- Primary, as the episodes are authoritative for factual information, such as runtime and presentation?
- Or secondary, which guide Misplaced Pages's content over primary interpretations?
-- Alex_21 TALK 22:22, 23 January 2025 (UTC)
- As someone who has never watched Abbott Elementary, the example given at Abbott Elementary season 3#ep36 would be confusing to me. If we are going to say that something with one title, released as a single unit, is actually two episodes, we should provide some sort of explanation for that. I would also not consider this source reliable for the claim that there were 14 episodes in the season. It was published three months before the season began to air; even if the unnamed sources were correct when it was written that the season was planned to have 14 episodes, plans can change. Caeciliusinhorto-public (talk) 10:13, 24 January 2025 (UTC)
- Here is an alternate source, after the premiere's release, that specifically states the finale episode as Episode 14. (Another) And what of your thoughts for the initial argument and contested article, where the sources were also posted after the multiple multi-part episode releases? -- Alex_21 TALK 10:48, 24 January 2025 (UTC)
- Vulture does say there were 14 episodes in that season, but it also repeatedly describes "Career Day" (episode 1/2 of season 3) in the singular as "the episode" in its review and never as "the episodes". Similarly IndieWire and Variety refer to "the supersized premiere episode, 'Career Day'" and "the mega-sized opener titled 'Career Day Part 1 & 2'" respectively, and treat it largely as a single episode in their reviews, though both acknowledge that it is divided into two parts.
- If reliable sources do all agree that the one-hour episodes are actually two episodes run back-to-back, then we should conform to what the sources say, but that is sufficiently unexpected (and even the sources are clearly not consistent in treating these all as two consecutive episodes) that we do need to at least explain that to our readers.
- In the case of Good Luck Charlie, while there clearly are sources saying that there were 100 episodes, none of them seem to say which episodes are considered to be two, and I would consider "despite airing under a single title in a single timeslot, this is two episodes" to be a claim which is likely to be challenged and thus require an inline citation per WP:V. I have searched and I am unable to find a source which supports the claim that e.g. episode 3x07 "Special Delivery" is actually two episodes. Caeciliusinhorto-public (talk) 12:18, 24 January 2025 (UTC)
- If a series had 94 half-hour episodes and three of one hour, why not just say that? Phil Bridger (talk) 11:04, 24 January 2025 (UTC)
- What would you propose be listed in the first column of the tables at List of Good Luck Charlie episodes, and in the infobox at Good Luck Charlie?
- Contentious article aside, my question remains as to whether primary or secondary sources are what we base Misplaced Pages upon. -- Alex_21 TALK 11:11, 24 January 2025 (UTC)
Request for research input to inform policy proposals about banners & logos
I am leading an initiative to review and make recommendations on updates to policies and procedures governing decisions to run project banners or make temporary logo changes. The initiative is focused on ensuring that project decisions to run a banner or temporarily change their logo in response to an “external” event (such as a development in the news or proposed legislation) are made based on criteria and values that are shared by the global Wikimedia community. The first phase of the initiative is research into past examples of relevant community discussions and decisions. If you have examples to contribute, please do so on the Meta-Wiki page. Thanks! --CRoslof (WMF) (talk) 00:04, 24 January 2025 (UTC)
- @CRoslof (WMF): Was this initiative in the works before ar-wiki's action regarding Palestine, or was it prompted by that? voorts (talk/contributions) 02:03, 24 January 2025 (UTC)
RfC: Amending ATD-R
|
Should WP:ATD-R be amended as follows:
− A page can be ] if there is a suitable page to redirect to, and if the resulting redirect is not ]. If the change is
+ A page can be ] if there is a suitable page to redirect to, and if the resulting redirect is not ]. If the change is disputed, such as by ], an attempt should be made to reach a ] before blank-and-redirecting again. The proper venue for doing so is ], although sometimes the dispute may be resolved on the article's talk page.
Prior discussion: Misplaced Pages talk:Deletion policy#Amending ATD-R
Support
- As proposer. This reflects existing consensus and current practice. Blanking of article content should be discussed at AfD, not another venue. If someone contests a BLAR, they're contesting the fact that article content was removed, not that a redirect exists. The venue matters because different sets of editors patrol AfD and RfD. voorts (talk/contributions) 01:54, 24 January 2025 (UTC)
- Summoned by bot. I broadly support this clarification. However, I think it could be made even clearer that, in lieu of an AfD, if a consensus on the talkpage emerges that it should be merged to another article, that suffices and reverting a BLAR doesn't change that consensus without good reason. As written, I worry that the interpretation will be "if it's contested, it must go to AfD". I'd recommend the following:
This may be done through a merge discussion on the talkpage that results in a clear consensus to merge. Alternatively, or if a clear consensus on the talkpage does not form, the article should be submitted through Articles for Deletion for a broader consensus to emerge.
That said, I'm not so miffed with the proposed wording to oppose it. -bɜ:ʳkənhɪmez | me | talk to me! 02:35, 24 January 2025 (UTC)
- I don't either, but I see the wording of
although sometimes the dispute may be resolved on the article's talk page
closer to "if the person who contested/reverted agrees on the talk page, you don't need an AfD" rather than "if a consensus on the talk page is that the revert was wrong, an AfD is not needed". The second is what I see general consensus as, not the first. -bɜ:ʳkənhɪmez | me | talk to me! 02:53, 24 January 2025 (UTC)
- I don't see this proposal as precluding a merge discussion. voorts (talk/contributions) 02:46, 24 January 2025 (UTC)
- I broadly support the idea; an AFD is going to get more eyes than an obscure talkpage, so I suspect it is the better venue in most cases. I'm also unsure how to work this nuance into the prose, and suspect that in the rare cases where another forum would be better, such a forum might emerge anyway. CMD (talk) 03:28, 24 January 2025 (UTC)
- Support per my extensive comments in the prior discussion. Thryduulf (talk) 11:15, 24 January 2025 (UTC)
Oppose
Discussion
- not entirely sure i should vote, but i should probably mention this discussion in wt:redirect that preceded the one about atd-r, and i do think this rfc should affect that as well, but wouldn't be surprised if it required another one consarn (speak evil) (see evil) 12:38, 24 January 2025 (UTC)