Wikipedia:Village pump (proposals)


Proposed: a massive automated invasion of privacy for the greater good

This is going to be a very controversial proposal, so here is my framing device.

Imagine that you managed a department store where shoplifting was rampant. There are cameras set up all over the store, recording everything that happens, so every act of shoplifting could theoretically be caught. However, you have a strict rule (in order to protect shopper privacy) that no security person will look at any of these cameras or any of their recorded footage unless a customer comes to complain of some observed or suspected instance of shoplifting. What if, however, instead of having a security person look at the cameras, you trained a bot to view the footage and flag actions that were likely instances of shoplifting, which the security people could then review?

I propose to apply a model like that to the problem of sockpuppetry. All the data that would need to be reviewed to determine who is in fact engaged in sock puppetry is already stored on our servers, so why don't we set a bot to the task of reviewing all the edits that have been made over some reasonable period (perhaps some number of weeks), and then flagging to the attention of the Checkusers a list of all the instances of probable sock puppetry in that period? To be clear, this proposed review would be carried out by a bot rather than a human, and would be done indiscriminately. The only information reported for human eyes would be actual instances of likely sockpuppetry violations, and that information would only be reported to the Checkusers. BD2412 T 15:40, 7 May 2020 (UTC)

  • No. This is not even good as satire. 50.74.165.202 (talk) 15:50, 7 May 2020 (UTC)
    It is interesting, however, that the first opposition comes from an IP with a grand total of 14 edits. BD2412 T 16:21, 7 May 2020 (UTC)
    While this reply is a month old it's still a very clear personal attack. You're clearly insulting this person because they're using an IP and have a low edit count; possibly implying the reason they don't like your proposal is because they themselves are a sock? This is rude and uncalled for and I would remove your comment but for the fact other editors seem intent on keeping it. Implying that someone's opinion is worth less because they edit from an IP is wrong and should be banned by policy. Chess (talk) (please use {{ping|Chess}} on reply) 21:36, 22 June 2020 (UTC)
  • Let's say someone volunteered to run machine learning for sockpuppetry and it worked - what do you think it would tell us? SportingFlyer T·C 16:28, 7 May 2020 (UTC)
    • I expect that it would tell us, for example, when one conflicted individual is trying to fool us into thinking that multiple editors independently support a position in a discussion. BD2412 T 17:10, 7 May 2020 (UTC)
  • I guess the question I have is "why", really. Why do we need such a thing? I can see that it's something that's possible, but I'm not clear on what the advantage would be, apart from sockpuppeters being "more effectively punished", even if they aren't actually doing any harm. I know there's the argument that they're doing harm just by sockpuppeting, and I have a great deal of sympathy for that, but I don't think a punitive effect alone, rather than actually helping the encyclopedia in any way, would justify measures like this. Naypta ☺ | ✉ talk page | 16:37, 7 May 2020 (UTC)
    • I'm actually not particularly concerned with "punishing" sockpuppetry, but we all know that there are plenty of instances where deletion discussions or content discussions for articles on companies, commercial products, celebrities, and the like are influenced by sockpuppet accounts appearing as independent editors. That sort of conduct should be stopped, even where the sockpuppeteers are successful in hiding their misconduct. BD2412 T 17:08, 7 May 2020 (UTC)
      • To my mind, the far more obvious problem to address in that case is situations in which you feel !votes are being treated as votes for the purposes of establishing consensus. I can see that it might lead to issues of false consensus, but I suspect those sorts of problems would reveal themselves fairly quickly, in the same way in which we deal with meatpuppetry. Naypta ☺ | ✉ talk page | 17:51, 7 May 2020 (UTC)
        • There are discussions where reasonable arguments are being made on both sides, but where one side has strong numerical support. That is to say, there are instances where sockpuppetry, while not obvious or mindless, can turn the outcome of a discussion. BD2412 T 18:06, 7 May 2020 (UTC)
  • I would suspect a bot like this would be incapable of telling an illegitimate sockpuppet from a valid one. Not to mention that if we are relying on data which is shielded per the priv-pol there's a not-inconsiderable risk of false positives from public terminals (not as much of an issue now but will become one once shelter-in-place and similar orders are lifted). —A little blue Bori v^_^v Onward to 2020 17:04, 7 May 2020 (UTC)
    • There are activities that a valid sockpuppet should never be engaging in, like voting multiple times in an XfD process under different usernames, or having a back-and-forth with itself made to appear as if two unconnected editors were having that discussion. The parameters of a bot review can be tweaked to focus on instances of conduct like that. BD2412 T 17:12, 7 May 2020 (UTC)
  • Transcluded to Wikipedia talk:CheckUser and Wikipedia talk:Sock puppetry. * Pppery * it has begun... 17:20, 7 May 2020 (UTC)
  • There's been talk by global CUs of using machine learning on a limited basis to look at the behaviour between accounts (not technical). I think this is fine and wouldn’t be a privacy violation any more than the editor interaction utility. If we’re looking at publicly available data, academics are already using it in studies of abuse on Wikipedia. Not a big deal there. I’m not really sure a bot running on every account makes sense, though.
    I would not support a bot or machine learning on CU data as that’s vague enough where anything the machine learning produces would likely be overwhelming and not useful. TonyBallioni (talk) 17:31, 7 May 2020 (UTC)
  • This is a completely bizarre idea. And I don't know why a transclusion is necessary, Pppery a simple link on those other pages surely would have sufficed. But the IP^^^ makes a good point about satire. serial# 17:32, 7 May 2020 (UTC)
  • BD2412, you seem to be assuming that one IP address equals one user. That's not always the case. Everyone in my house uses the same router, so we share an IP address, but we each have our own accounts. If I attend an edit-a-thon, I share the same IP with all the other attendees. I don't think I should be investigated based on that "evidence". Vexations (talk) 17:36, 7 May 2020 (UTC)
    • I have no doubt that there are instances that may look like sockpuppetry but have an innocent explanation. However, there are also instances that will look like sockpuppetry because they are in fact sockpuppetry, which would otherwise go unnoticed and allow Wikipedia to be spammed with commercial content or the like. BD2412 T 17:44, 7 May 2020 (UTC)
  • Where does the "invasion of privacy" come in? If the proposal is to have a bot analyze edits to look for patterns, have at it; that involves no private data. If the proposal is to have the bot look at everyone's user agent and/or IP addresses and flag the overlaps, that's basically an automated checkuser-everyone, which will probably be a useless invasion of privacy, as it will just tell us, for example, which users are from the same university. I think some behavioral probable cause should continue to be required, as it is now, for a CU to be run. Levivich[dubiousdiscuss] 17:41, 7 May 2020 (UTC)
    • I had rather assumed that people would automatically consider this an invasion of some kind of privacy. What I am really interested in is finding instances where different registered accounts from the same IP address are participating in discussions or appearing to talk to each other, which is where suspicion should arise. BD2412 T 17:47, 7 May 2020 (UTC)
      • @BD2412: I don’t mind sharing that some smaller version of the behavioural analysis that Levivich mentioned is being worked on by a CU on another project as a tool. People on non-English wikis (read: authoritarian regimes) were more concerned with the privacy aspect for understandable reasons. My argument is something along the lines of “We literally already have researchers doing this; there’s absolutely nothing private here.” It’s a decision aid to help focus resources, and would be likely to decrease use of CU and protect against false positives. I don’t see this as a privacy policy issue because, as part of the nature of an open wiki, literally everything is public and people already have AI being run on their edits (see ORES). ST47 might have more to add. TonyBallioni (talk) 18:04, 7 May 2020 (UTC)
      • @BD2412 and ST47: fix ping. TonyBallioni (talk) 18:04, 7 May 2020 (UTC)
        • I would agree that if something along these lines is already being done, then there's no need for a proposal for it to be done. However, I had not previously heard of such an effort. BD2412 T 18:08, 7 May 2020 (UTC)
          • Yeah, it’s very early stages but wouldn’t impact the privacy policy and would be focused on behaviour, not IPs. I don’t think it would be run automatically either. Basically the idea that’s been floated is to use a tool that looks at certain similarities to be a behavioural analysis tool that humans can then look at as a decision aid. Like I said, early stage, but the idea of using additional tools has been discussed. TonyBallioni (talk) 18:14, 7 May 2020 (UTC)
  • I think this is a great idea. However, who is going to run the bot? We would also need to see the source code.--SharʿabSalam▼ (talk) 18:06, 7 May 2020 (UTC)
    SharabSalam, if we're using publicly available data only (which may be the case), I don't think either question matters. Source code doesn't help, because it's never possible to prove that it is the code actually running on the bot. And neither point matters because anyone can publicly scrape Wikipedia.
    If we're using private data, I think ultimately this would have to be added as a core part of the software, or a bot run by the WMF. Maybe I'd support the CheckUsers, collectively, authorising the usage of a bot hosted by one of them -- maybe. But no private user, or administrator in this case, however, should be using non-public data in their individual capacity. ProcrastinatingReader (talk) 15:53, 16 June 2020 (UTC)
  • @BD2412: What about the prosecutor's fallacy and the birthday paradox? There will be thousands of editors every day who by chance alone will have an IP+Useragent match with a completely unrelated user. And some of those will, by chance, be participating in the same discussions. That's why CheckUser is not for fishing. How would the bot distinguish socks from unlucky matches? Suffusion of Yellow (talk) 22:08, 7 May 2020 (UTC)
    • @Suffusion of Yellow: "[...] thousands of editors every day who by chance alone will have an IP+Useragent match": this is actually much less common than it used to be, but sure, it happens. Usually an exact IP+UA match at the same time is the same person. It requires human judgement and we tend to be fairly conservative in blocking. The bigger issue would be on ranges, where you’d be overwhelmed easily and the data would be fairly useless unless you knew what you were looking for. TonyBallioni (talk) 22:19, 7 May 2020 (UTC)
      • @TonyBallioni: To be fair, whilst it might be less common than it used to be for now, I strongly suspect it'll increase rapidly over the next few years as IPv4 address exhaustion forces more ISPs to implement carrier-grade NAT. It remains to be seen whether IPv6 adoption will continue at the same rate - I hope it does, but at least over here in the UK, very few ISPs currently support v6, and even fewer have it as a default. At the point at which CGNAT is the norm for residential networks, like how it presently is for many mobile data networks, that issue of multi-user IPs is going to become bigger. Naypta ☺ | ✉ talk page | 22:23, 7 May 2020 (UTC)
        • Well, yeah, the UK is second only to Nepal in being the exception to my above statement :) TonyBallioni (talk) 22:26, 7 May 2020 (UTC)
    • The Checkusers would be the ones to make that distinction. Actual suspect cases (matches of identity occurring on the same article or discussion) will likely be a small number—unless the problem itself is much bigger than anyone realizes. BD2412 T 22:22, 7 May 2020 (UTC)
    Suffusion of Yellow, we can use fingerprints too. There are various ways to get highly distinctive identifiers out of people. ProcrastinatingReader (talk) 15:54, 16 June 2020 (UTC)
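The birthday-paradox concern raised above can be made concrete with a quick back-of-the-envelope estimate. The editor and fingerprint counts below are illustrative assumptions, not real Wikipedia figures:

```python
from math import exp

# Illustrative assumptions (not real Wikipedia statistics):
n = 100_000   # editors active in the review period
k = 2 ** 20   # distinct IP + user-agent "fingerprints" in circulation

# Expected number of editor pairs sharing a fingerprint purely by chance:
# the same counting argument that drives the birthday paradox.
expected_colliding_pairs = n * (n - 1) / (2 * k)   # ~4,768 chance pairs

# Probability that at least one chance collision occurs (Poisson approximation).
p_any_collision = 1 - exp(-expected_colliding_pairs)   # effectively 1.0
```

Even with a million possible fingerprints, chance collisions among a hundred thousand editors are not merely possible but expected in the thousands, which is the statistical core of the "unlucky matches" objection.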
  • I will never support a proposal with a section title like that. Merits aside (this is an area I have no interest or knowledge in), you've poisoned the well for me, and I suspect I'm not alone. —⁠烏⁠Γ (kaw)  22:35, 07 May 2020 (UTC)
    • As it turns out, the point is rather moot. BD2412 T 16:07, 10 May 2020 (UTC)
      • What? —⁠烏⁠Γ (kaw)  00:23, 12 May 2020 (UTC)
  • It might be easier to start with a narrower approach that could serve as a proof of concept. Maybe a bot that searches for the known characteristics of specific LTAs, if that doesn't already exist. Otherwise, maybe a bot that specifically checks XfDs, or even just XfDs in specific categories where sockpuppetry is likely to be common. Sunrise (talk) 15:57, 10 May 2020 (UTC)
  • Would this be permissible under the various privacy regulations that the WMF may be subject to? If so, would the WMF need to re-write its privacy policy to reflect this additional processing of personal information? Also, I share the sentiments of User:KarasuGamma. Privacy is important, and I cannot support "a massive automated invasion of privacy." Thanks, Tony Tan · talk 04:24, 13 May 2020 (UTC)
    Tony Tan, it would be permissible, and no change is required. Per the GDPR and CCPA, the two major pieces of privacy legislation, data can be used for this purpose but must be appropriately disclosed in a privacy policy. On Wikimedia's privacy policy the use guidelines already state that personal information collected may be used "To fight spam, identity theft, malware and other kinds of abuse." ProcrastinatingReader (talk) 15:59, 16 June 2020 (UTC)
    @ProcrastinatingReader: I hope you don't aspire to be a lawyer. The relevant line is "As a result, some volunteers have access to certain Personal Information, and use of that Personal Information may not be governed by this Privacy Policy. Volunteers that have such access include: Administrative volunteers, such as CheckUsers or Stewards" which doesn't cover bots. I severely doubt the line you quoted would allow for dynamite fishing, which is basically what this is. - Alexis Jazz 11:54, 6 July 2020 (UTC)
    Alexis Jazz, I do quite a lot of work in my field with the GDPR. Wikipedia is a special case as a volunteer-style structure, and obviously statements made here don't mean anything compared to the WMF's advice, but the line in the terms I quoted is indeed relevant. The point of this proposal is automation to detect sockpuppeting; the bot part is merely an implementation detail. I don't think anyone would disagree that if the WMF added this to the core software, it would be covered under the existing policy.
This isn't dynamite fishing. Measures like the one proposed here are carried out openly by other companies for protection against abuse, and have been for over a decade, and recent privacy legislation isn't meant to stop that. This project is just incredibly conservative about any form of privacy invasion, even IPs, but these examples are not top-secret forms of personal information under privacy legislation.
The relevant question is then: can a bot not hosted by the WMF also comply with the existing privacy policy? You've selectively quoted the line; it ends by saying "When these administrators access Personal Information that is nonpublic, they are required to comply with our Access to Nonpublic Information Policy, as well as other, tool-specific policies." If, per your interpretation, the privacy policy doesn't cover volunteers, it would obviously follow that usage of bots cannot violate said privacy policy. But this interpretation isn't correct. The WMF is the data controller for personal information such as IP addresses. The fact that they choose to make CheckUsers sign a separate agreement (which is between the WMF and CUs, not between the data subjects and CUs) doesn't change that fact. Their being volunteers doesn't suddenly give IP addresses a special status in privacy law where no entity is the controller of that information. The WMF remains the controller; CUs are somewhere between agents and processors. The data is obviously governed by the privacy policy, since the WMF is the data controller. Indeed, the sharing of data under the policy with community-elected individuals, including functionaries, is covered earlier in the policy.
This doesn't automatically mean that the bots are permissible, of course, just that they're not violations of the WMF privacy policy. They are probably violations of the CU access to non-public information agreement, which is between CUs and the WMF, so that may need amending to allow this change. ProcrastinatingReader (talk) 12:10, 6 July 2020 (UTC)
@ProcrastinatingReader: If the WMF implemented this in the core software there would still be a good number of questions depending on details, however, the proposal is to "set a bot to a task of reviewing", which doesn't suggest WMF. And in the analogy of WP:NOTFISHING, this is dynamite fishing. The privacy policy covers volunteers (checkusers are not paid), but there is a world of difference between the WMF allowing checkusers to query checkuser data of specific targets (which is clearly done in an effort to curb abuse) and dumping that information of everybody in a ZIP and mailing it to a bot operated by a volunteer. - Alexis Jazz 12:32, 6 July 2020 (UTC)
  • I agree with Suffusion of Yellow: what happened to WP:NOTFISHING? Double sharp (talk) 13:19, 17 May 2020 (UTC)
  • With the caveat that sockpuppetry isn't an area I'm very familiar with, that's exactly what a sockpuppet would say... I'm persuaded by BD2412 and the others arguing that this isn't that different from research. With the data already on our servers, I don't see how hiding it from ourselves by disallowing a bot to run in this way would preserve privacy. If those of you opposed can come up with a realistic example of a way this could go wrong in practice and end up violating someone's privacy in a way that meaningfully harms them, I might be persuaded, but currently I'm drawing a blank on that. There are clearly technical aspects to be worked out, and that could get complicated especially if the privacy policy ends up being involved, but overall, when we're looking at a tool that could deliver probable cases of sockpuppetry to highly trusted editors able to investigate them, WP:NOTFISHING seems like a less relevant piece of guidance than WP:SUICIDEPACT. {{u|Sdkb}}talk 04:50, 21 May 2020 (UTC)
    • The whole point of WP:NOTFISHING is that CU can run into tons of false positives just by random chance even if the user is doing nothing wrong. It's not about assuming good faith, so WP:SUICIDEPACT makes no sense here. I would, in fact, prefer an absolute right to be free from unjustified search. Wug·a·po·des 20:50, 4 June 2020 (UTC)
  • I had a look at a recent suspected sockpuppet, and was able to match him to a blocked user using the type of approach a machine learning program might use. However the blocked user is someone we would really like to come back on a standard offer, and the sock had done no harm, so I did not share my suspicions. "First do no harm." A bot would not have this discretion. All the best: Rich Farmbrough 22:03, 30 May 2020 (UTC).
  • @BD2412, Suffusion of Yellow, TonyBallioni, and Pppery: I don't know how many of you are aware but the WMF is currently running research projects to detect sockpuppets without any private information, probably NLP, sentiment analysis and the likes. I came across the project quite accidentally but here's the link: meta:Research:Sockpuppet detection in Wikimedia_projects. I'm quite sure that with a big dataset like the one at SPI, it'll be quite easy to detect sockpuppets of some prolific masters. --qedk (t c) 12:47, 2 June 2020 (UTC)
QEDK, the dataset isn't actually that large. I looked into some of this stuff last winter. I found 22,618 SPI archives, containing a total of 105,426 blocked socks (as of 12 December 2019). As machine learning corpora go, that's not a huge amount of data. -- RoySmith (talk) 13:39, 13 June 2020 (UTC)
  • I don't suppose it would be possible to add keylogging to the arsenal of detection tools? The keystrokes that an individual uses to type words are, with enough samples, like a fingerprint. BD2412 T 15:22, 2 June 2020 (UTC)
    • It is possible for browsers to use JavaScript/jQuery (or even CSS) to log keystrokes, but let me begin by telling you what a terrible idea it is. A lot of websites do this to collect data (called session replay), and it's simply unethical; in some cases the information you reveal is available in plaintext to third-party analytics providers. There's no reason ever for Wikipedia to do this. Imagine you're editing Wikipedia while your bank login page is open in another tab, and you type your password without noticing which tab you're on: your personal information would automatically be logged, and anyone with access to it could compromise you. Or the server could be compromised, or an unpatched MITM attack used, making this a very, very vulnerable attack vector. --qedk (t c) 16:20, 2 June 2020 (UTC)
      • I find it hard to imagine accidentally typing a bank password into Wikipedia. However, my intent is not to record what people type, but how they type. That is the distinguishable characteristic. BD2412 T 16:24, 2 June 2020 (UTC)
    • Now you're just fucking with us. - Alexis Jazz 11:54, 6 July 2020 (UTC)
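The keystroke-dynamics idea floated above relies on timing rather than content. As a purely illustrative sketch, under the assumption that only inter-keystroke intervals (not the keys themselves) are collected, two typing-rhythm profiles might be compared like this; all sample values are made up:

```python
from statistics import mean, stdev

def typing_profile(intervals_ms):
    """Summarize a sample of inter-keystroke intervals (timing only,
    no key content) as a (mean, standard deviation) pair."""
    return (mean(intervals_ms), stdev(intervals_ms))

def profile_distance(p, q):
    """Euclidean distance between two (mean, stdev) profiles;
    smaller distance suggests a more similar typing rhythm."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# Hypothetical interval samples in milliseconds:
# a1 and a2 from the same typist, b from a different, slower typist.
a1 = [120, 135, 110, 140, 125, 130]
a2 = [118, 138, 112, 142, 128, 127]
b  = [220, 260, 190, 240, 215, 250]

same = profile_distance(typing_profile(a1), typing_profile(a2))
diff = profile_distance(typing_profile(a1), typing_profile(b))
```

Real keystroke-dynamics systems use far richer features (per-digraph timings, hold times) and much larger samples; this two-number profile is only meant to show that the signal is the rhythm, not the text typed, which is the distinction BD2412 draws above.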
Would the source code be public, like most things here, or private, like AbuseFilters? >>BEANS X2t 14:32, 4 June 2020 (UTC)
  • Support - but we're liable to lose a whole lot of editors and maybe even a few admins. ^_^ Atsme Talk 📧 14:38, 4 June 2020 (UTC)
  • I'd support this, provided the source code is reviewable by BAG and checkusers and similar. Basically automated detection of suspicious patterns to be flagged for CU review. Would be a great way to get rid of a bunch of PAID editors and similar. Headbomb {t · c · p · b} 20:12, 4 June 2020 (UTC)
    • Source code makes virtually no difference here. It's like open source for electronic voting machines. They're still shit. They will always be shit. They will never be not shit. - Alexis Jazz 11:54, 6 July 2020 (UTC)
  • Oppose because now there's talk of running keyloggers, and I don't trust that this will remain a restrained process if implemented. We're here to build an encyclopedia, not a police state. If you want to teach machines to detect sock puppets, use publicly available data like everyone else, but no one, whether personally or by proxy, should be accessing users' personally identifiable information without cause. Wug·a·po·des 20:50, 4 June 2020 (UTC)
    We have plenty of cause - we have uncovered several large and sophisticated sockpuppet editing operations in the past few years, and there is really no question that there are others going on undetected right now. Also, this "talk of running keyloggers" of which you speak is basically one question that I asked. The proposal is for CU-style checks of relationships between accounts. BD2412 T 21:50, 4 June 2020 (UTC)
    But you understand how starting your proposal with "a massive invasion of privacy" and then later, off-handedly suggesting an even more invasive violation of privacy does not give me confidence that this will be restrained, yes? There's large and sophisticated drug cartels and other criminal enterprises operating undetected right now in many countries, that doesn't mean I want people opening all my mail to stop them. Like I said, use publicly available data if you care so much, but sorry, you've given me no reason to trust that you will respect user privacy. Wug·a·po·des 22:24, 4 June 2020 (UTC)
  • Oppose. Wikipedia is a fairly important institution. Its neutrality, as far as I can tell, comes in large part from the fact that editors have been able to expect a great deal of privacy (Wikipedia covers some pretty controversial subjects, as you may have noticed). Setting a precedent that things like editor IP addresses, login times, et cetera are freely used in the vague aim of "preventing abuse" opens up an unfathomably deep can of worms and degrades the trust of people who are, first and foremost, unpaid volunteers. The idea of using sentiment analysis on editors is already kind of creepy, and there's no need to top it off by destroying a very well-established standard of propriety built over the course of decades. I am pretty sure that if I lived under a regime where my Wikipedia editing constituted a criminal offense, I would be heading out the door and not looking back at the first sign of this proposal being enacted. { } 05:06, 9 June 2020 (UTC)
  • Support WMF funding community discussion The Wikimedia community must use automated tools to manage more of Wikimedia projects. This is absolutely necessary and there is no alternative. There are 100 automated tools available, all of which can do different things, and this proposal is for one tool doing one of those things. Any given tool can operate in 100 different ways, and each of those ways makes the wiki community both more powerful and more vulnerable in different ways. We have to sort this out with conversation.
The Wikimedia movement brings in US$130 million/year growing at 10% a year. There is no shortage of funding, but there is a major shortage of community organization and leadership. Right now the default path is to use this money to fund internal private conversations at the WMF to sort this. They already employ software developers who are making tools like this a reality. Either the wiki community discusses this or otherwise this gets several million / year in WMF private investment until the WMF pops something out in the end. There is no option to not spend millions of dollars on this, or to not eventually implement this technology.
I think the Wikimedia community should advocate for funding to a mix of Wikimedia community organizations, universities, and nonprofit partners to support more community discussion. There are many, many social and technical issues to discuss. This goes far beyond one conversation on the village pump, and into massive global design of culture and society which requires a lot of input. If 10 universities in 10 countries each addressed one social challenge in this for 3 years, and each got US$100,000 from the WMF over that time period (meaning US$3 million for the project) then that would be the approximate scale of the response we need to start the conversation on safely developing automated tools for moderation.
This issue is far, far beyond what volunteers can crowdsource without expert partners and funded support. The money is available and I support anyone who would draft and propose any budget to address this problem anywhere in the US$500,000 - $5,000,000 range over 1-5 years. Also in general, I support more community proposals to speak up about spending more money to address big issues. Blue Rasberry (talk) 15:00, 13 June 2020 (UTC)
  • I support this and any proposal which makes sockpuppetry less feasible and thwarts them sooner. We know that editor retention is an issue, and I know for a fact from various discussions I've been in that the frustration of dealing with sockpuppets plays a role in driving away good editors. The proper analogy for Wikipedia is not that of a society (with rights like being free from 'unjustified search') but that of a private business (as BD2412 built his framing device around). Editing Wikipedia is not a human right but a privilege, and one that comes with limitations that one must follow. As noted above, the Wikimedia Privacy Policy does allow for use of data in this way. Of course, future discussion regarding exact criteria for the bot to flag accounts, and admins' human judgment before blocking, would ensure that legitimate alternate accounts or people sharing a household would not be blocked. But, sockpuppetry is far too common, it can seriously damage the encyclopedia by facilitating the addition of sneaky POV text, and more needs to be done to stop it. Crossroads -talk- 05:08, 19 June 2020 (UTC)
  • Oppose: Too many chances of false positives. Also, "actual instances of likely sockpuppetry" is a very contradictory and vague statement. I'm of the opinion that this whole anti-sock crusade is a rather uncertain affair. The more controversial the proposal, the better it is to reject it. HalfdanRagnarsson (talk) 16:11, 20 June 2020 (UTC)
  • Support This is merely a tool, one of many, that admins can use to determine sock puppetry. The final sock decision is made by people, not bots. The more tools and data admins have available to make better decisions, the better. If the admins make a bad call they are personally responsible, not the tool. Next time they might not use the tool again, or give it less weight, or learn which situations it tends to be right or wrong in. It's self-correcting. I get the impression the opposers believe the bot is a fully automated decision maker?! That would be crazy and is a strawman argument. We have lots of "self-driving cars" that are not fully autonomous, e.g. the driver has to be watching the road behind the wheel, etc. Computer automation is like that: on a spectrum, not black and white. -- GreenC 17:33, 20 June 2020 (UTC)
  • Support in limited form. It could be something that automatically flagged multiple accounts from the same browser/IP or whatever data is available, and then showed this to admins only.
    • On the detection side, consider current SPIs that turn up master account with two or more socks. This kind of thing could be automatically detected and flagged, removing the current subjective decision-making required when deciding to run a CU. Someone files an SPI, the admin would see it, and then could request CU per "automated detection data" or something like that. There would be a clear reason for a human CU, based on the detection criteria. Think of this like an automated "stage 1" CU that only admins can see.
    • On the account creations side, are there really that many times where a single computer would create and operate more than three accounts in a week or a month? Account creations past a certain number could be flagged.
Automated detection of account similarities/anomalies is a good idea, and it could start out very conservatively: for example detecting multiple !votes from the same computer at AfD, and the creation of two or more accounts within a week from the same IP or computer.
Another idea might be to have the algorithm provide a "likely trouble" percentage. This might actually enhance privacy, as a low percentage would mean the human CU would decide not to run a check that reveals personal data. ThatMontrealIP (talk) 04:16, 22 June 2020 (UTC)
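The conservative "stage 1" detection described above, such as flagging multiple !votes in one discussion from the same computer, amounts to a simple grouping pass over public or CU-visible records. A minimal sketch, where the tuple fields and sample values are hypothetical stand-ins for whatever identifier is actually available (IP, IP+UA, etc.):

```python
from collections import defaultdict

def flag_duplicate_votes(votes):
    """votes: iterable of (discussion_id, account, fingerprint) tuples.
    Returns a list of (discussion_id, fingerprint, accounts) entries for
    every discussion in which one fingerprint !voted under 2+ accounts."""
    by_discussion = defaultdict(lambda: defaultdict(set))
    for discussion, account, fingerprint in votes:
        by_discussion[discussion][fingerprint].add(account)
    flags = []
    for discussion, fingerprints in by_discussion.items():
        for fingerprint, accounts in fingerprints.items():
            if len(accounts) >= 2:
                flags.append((discussion, fingerprint, sorted(accounts)))
    return flags

# Hypothetical sample: two accounts !voting in the same AfD from one fingerprint.
sample = [
    ("AfD:Example", "AccountA", "fp1"),
    ("AfD:Example", "AccountB", "fp1"),
    ("AfD:Example", "AccountC", "fp2"),
    ("AfD:Other",   "AccountA", "fp1"),
]
flagged = flag_duplicate_votes(sample)
```

Note that the output here is only a flag for human review, matching the proposal's "stage 1" framing: the tool surfaces a coincidence, and a CheckUser or admin still decides whether it is an edit-a-thon, a shared household, or a sock.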
  • I don't see how this would be an additional invasion of privacy: every instance of an actual action being taken against someone would still be carried out by a human, and this would simply sort the data to allow for more effective moderation. The implication you make, however, that because it is carried out by a bot it is impartial, is incorrect. Bots are created by people, and it isn't easy to write an impartial algorithm for complex discrimination of data that concerns justice. — Preceding unsigned comment added by 75.158.150.208 (talkcontribs)
  • Oppose - all this will lead to is disruption, privacy violations and wasted checkuser time. Detecting a sockpuppet automatically would be very challenging; there are so many edge cases that automating this process wouldn't be viable, IMO. Ed6767 talk! 22:30, 26 June 2020 (UTC)
  • Oppose: I thought a troll had gotten into the village pump when I read the title of the post. It would just waste CheckUsers' time and create privacy concerns, and would be redundant because of WP:SPI. SuperGoose007 (Honk!) 00:59, 3 July 2020 (UTC)
  • Support, detected/suspected socks would still have an opportunity to respond prior to any admin action. It seems to me that in my meagre six years here the problem has gotten much worse whilst the number of editors per article has dropped significantly; trying to combat the constant barrage from SPAs and SP IPs is time-consuming enough without having to then justify your suspicions of sockpuppetry at SPI. This would be a beneficial tool with realistically little chance of abuse, as everything everyone does here is fully visible to someone. Cavalryman (talk) 01:27, 3 July 2020 (UTC).
    Support - ooops - at least I'm consistent. 09:18, 3 July 2020 (UTC) but here's my question...what if there's a conference and 2 people are sharing a room? What about edit-a-thons? What about a family that edits together, or a couple, or a neighbor decides they want to edit WP and comes over to learn? Does any of that matter? Atsme Talk 📧 01:37, 3 July 2020 (UTC)
    • I am presuming that a likelihood of shenanigans will be gauged based on behavioral cues (such as grouped voting on obscure AfDs, or unusual use of the same phrasing). I myself have edited while at Wikimania, and in a pinch even borrowed another Wikipedian's laptop to make some edits at the Wikimania in Italy. Of course, editors may be asked to explain the circumstances of apparent sockpuppetry, and an editor who is blocked can always appeal the block. BD2412 T 02:30, 3 July 2020 (UTC)
    • Atsme, If I am not mistaken CU info also includes browser type and version number, so that is something to distinguish computers. ThatMontrealIP (talk) 02:46, 3 July 2020 (UTC)
  • Support: As someone who has written many sockpuppet investigation reports (the latest on 21 June 2020) I think this is a great idea. It would save good editors a lot of time that is spent on relentlessly writing sockpuppet investigation reports. This saved time would be spent on creating good content. Also, I don't see how this supposed "invasion of privacy" would harm any honest editor. Tradediatalk 07:43, 6 July 2020 (UTC)
  • Oppose All the data needed to be reviewed to determine who is in fact engaged in sock puppetry is already stored in our servers, so why don't we set a bot to a task of reviewing all the edits A bot? Who's gonna run that? You? That'll make for an interesting target for 4chan. Me? Same, also, no fucking way. Any other volunteer? Same. Which is also only one of the reasons why Legal will shoot this attempt at dynamite fishing down before you'll ever get to the pond. So, WMF? That would require some severe changes to the privacy policy, not to mention require substantial developer time. Yeah.. probably not. Summoning Mdennis (WMF), JSutherland (WMF) and Jrogers (WMF). Humour us. - Alexis Jazz 11:54, 6 July 2020 (UTC)
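For illustration, the conservative account-creation flagging suggested in the support comments above amounts to little more than a sliding-window count per IP. This is a minimal sketch under stated assumptions: the record format, function names, and thresholds are all hypothetical, and real CheckUser data is only accessible server-side, never to a script like this.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical thresholds; in practice these would be set very
# conservatively and tuned by the community.
CREATION_WINDOW = timedelta(days=7)
MAX_ACCOUNTS_PER_IP = 3

def flag_suspicious_ips(creations):
    """creations: iterable of (account_name, source_ip, creation_datetime).

    Returns {ip: count} for every IP that created more than
    MAX_ACCOUNTS_PER_IP accounts inside any CREATION_WINDOW-long span.
    """
    by_ip = defaultdict(list)
    for _name, ip, when in creations:
        by_ip[ip].append(when)
    flagged = {}
    for ip, times in by_ip.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # Shrink the window until it spans at most CREATION_WINDOW.
            while times[end] - times[start] > CREATION_WINDOW:
                start += 1
            count = end - start + 1
            if count > MAX_ACCOUNTS_PER_IP:
                flagged[ip] = max(flagged.get(ip, 0), count)
    return flagged
```

Matching the "stage 1" idea above, only the flagged IPs (not every editor's data) would then be surfaced, and only to CheckUsers.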

Populate medical resources in disease articles with information from curated sources[edit]

Wikipedia is a valuable source of textual data in many areas, including disease understanding. In order to interoperate Wikipedia with other sources (Electronic Health Records, biological databases, etc.), disease articles must contain disease codes (aka medical resources). Currently many articles about diseases contain either few or no medical resources, which hinders interoperability. My proposal is to use curated mapping sources to automatically populate the medical resources of disease articles in Wikipedia. Eduardo P. García del Valle (talk) 14:31, 23 May 2020 (UTC)

This sounds like an idea to discuss with WikiProject Medicine. Taking Coronavirus disease 2019 as an example, I see we already do list:
ICD-10: U07.1, U07.2 MeSH: C000657245 SNOMED CT: 840539006
Those values are easy to parse and taken from Wikidata. I have no idea how easy it is to look up a value in Wikidata (which links to the Wikipedia article), rather than having to start from a known topic-name to find the Wikidata entry. DMacks (talk) 14:45, 23 May 2020 (UTC)
I agree WikiData is a good source for disease codes. In many cases, it's more complete than the corresponding Wikipedia equivalent. However, WikiData is still incomplete. For instance, azotemia is missing the SNOMED CT code 445009001. Thus, it's necessary to consume other curated sources of mappings. --Eduardo P. García del Valle (talk) 22:06, 28 May 2020 (UTC)
User:Eduardo P. García del Valle, I hope that you will find friends at WT:WikiProject Medicine. Do you know how to add that information to Wikidata? If you add it there, it's available to all the Wikipedias – not just the English one. I think that User:RexxS is one of the best editors you could talk to about that. WhatamIdoing (talk) 21:20, 1 June 2020 (UTC)
@Eduardo P. García del Valle: it is not possible to directly insert content into a Wikipedia article from an arbitrary external source. That means that data must first be added to Wikidata in order to include it in Wikipedia. This has the advantage that all 300+ different language Wikipedias can make use of it, and many databases have been uploaded to Wikidata, which currently contains 87 million data items. The template {{medical resources}} is already coded to fetch some identifiers from Wikidata, including SNOMED CT, and User:Little pob was adding several more to the sandbox version for testing. That could be expanded as more data became available. --RexxS (talk) 00:00, 5 June 2020 (UTC)
The sandboxing was finished and was pulling WikiData properties as expected. IIRC, the changes were moved over; but then an issue was spotted, and so it was rolled back. The issue is that any section headers placed after the template are not shown. As the template is supposed to be located within the EL section, there shouldn't be any sections listed afterwards; but, given WP:IAR, it should be looked at. (NB this issue is in the current template too.) Little pob (talk) 11:09, 6 June 2020 (UTC)
@RexxS: How should I then contribute to WikiData and make sure the content is ultimately loaded into Wikipedia? Should I use the WikiData API and request a Wikidata bot? If this is the right way to contribute disease codes to Wikipedia, why is the input of this information not disabled when editing Wikipedia articles? I think there should be a bi-directional alignment of Wikipedia and Wikidata to keep both sources synced. Otherwise, anyone could add codes via Wikipedia that differ from those in Wikidata (which is happening currently). Eduardo P. García del Valle (talk) 15:55, 6 June 2020 (UTC)
@Eduardo P. García del Valle: I believe the normal way to import the contents of a database into Wikipedia is by using a bot, but that question would be better asked on Wikidata at d:Wikidata:Bot requests. The Wikipedia and Wikidata communities value their independence from each other to such an extent that the sort of synchronisation that you're looking for does not exist, sorry. I write code to import data from Wikidata into Wikipedia, so you can always ask me about a particular implementation that you're considering. --RexxS (talk) 18:41, 6 June 2020 (UTC)
@RexxS: Thank you very much for the information. I understand that there's no automated sync between Wikidata and Wikipedia, for the sake of their independence. However, my goal is to ultimately contribute the missing disease codes to Wikipedia, and by adding them first to Wikidata there's no guarantee that they will make it into Wikipedia. I proposed a Wikipedia bot (see Requests_for_approval/DismanetBot), but it requires prior consensus. I'll explore the Wikidata bots, just in case, and get back to you if I have any questions. Thanks once again. Eduardo P. García del Valle (talk) 13:27, 13 June 2020 (UTC)
"Currently many articles about disease contain either few or no medical resources." As a clinical coder, this is something I'm happy to help with. My initial thought is to see if there is a way of subtracting one transclusion list from another. For example, subtract the list of articles that contain the {{medical resources}} template from the list of articles that contain {{infobox medical condition}}. But I don't know if we have a tool that can do this? (I am not watching this page, so please ping me if you want my attention.) Little pob (talk) 13:53, 7 June 2020 (UTC)
@Little pob: Thanks for your input. In an ongoing project at my University, we continuously scan Wikipedia to extract text and codes from disease articles (see DISNET Project). We have an accurate idea of the missing codes, and designed a bot (see Requests_for_approval/DismanetBot) to contribute codes extracted from external sources, including UMLS. Eduardo P. García del Valle (talk) 13:27, 13 June 2020 (UTC)
@Eduardo P. García del Valle: to be honest, I'm not familiar enough with bots to be comfortable offering a WP:!VOTE in support or opposition. To get more voices, if you haven't already, can I suggest you put a WP:APPNOTE on WT:MED pointing to the discussion here? I'm going to courtesy ping @RexxS: to return to the discussion, as I know their preference is for centralising this information in one place, and WikiData already lists a large number of classification codes from various ICDs. I also want to highlight that the local version of ICD-10 is often just called "ICD-10" by coders – even when it has a local suffix (e.g. ICD-10-AM). So care must be taken, both with the source and especially with the parameter the bot will be entering codes against. This is because there are often incompatibilities between the codesets in the WHO-published version and the local version. This issue actually came up at WD, where ICD-10-CM properties were being added under the ICD-10 identifier (all fixed now). (I am not watching this page, so please ping me if you want my attention.) Little pob (talk) 12:17, 14 June 2020 (UTC)
@Little pob: Thanks for your valuable input. As suggested, I mentioned our discussion on WT:MED, to gather more opinions on this point. See [[1]].--Eduardo P. García del Valle (talk) 19:01, 20 June 2020 (UTC)
@Eduardo P. García del Valle: for some reason I didn't get a notification for the ping; but you're most welcome. Hopefully you'll get some insightful input from the project members. Little pob (talk) 12:21, 30 June 2020 (UTC)
@Little pob: WP:PETSCAN is great for doing boolean category and template searches. For example, I wrote PSID 16645246 to do "article-namespace that has {{infobox medical condition}} but not {{medical resources}}" (currently 1850 results). DMacks (talk) 14:08, 29 June 2020 (UTC)
@DMacks: very useful tool, thanks. I hope to start my way down that list at some point soon. Though I will have to search the talk archives and check whether there's consensus before adding to some of those articles (e.g. transvestic fetishism). Little pob (talk) 12:19, 30 June 2020 (UTC)
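For illustration, the "subtraction" PetScan performs in the query above is a plain set difference over the two transclusion lists. This is a minimal sketch with hypothetical stand-in lists and a hypothetical function name; in practice the lists would come from PetScan itself or from the MediaWiki API's list=embeddedin query for each template.

```python
def missing_medical_resources(infobox_pages, resources_pages):
    """Articles that transclude {{infobox medical condition}} but not
    {{medical resources}}: a set difference, sorted for stable output."""
    return sorted(set(infobox_pages) - set(resources_pages))
```

The same approach generalises to any "has template A but not template B" maintenance list, which is exactly what the PSID 16645246 query encodes server-side.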

Left sidebar update follow-up[edit]

Wikipedia:Requests for comment/2020 left sidebar update was recently closed by Barkeep49 and DannyS712, and most of the results have been implemented. A huge thank-you to everyone who participated! Per the close, several items require follow-up due to low participation or lack of consensus. I am opening this discussion as a space for those discussions to take place. It is being transcluded to the WP:SIDEBAR20 page and will be moved there when it concludes. {{u|Sdkb}}talk 06:47, 12 June 2020 (UTC)

Introduction page[edit]

The RfC found consensus to add an introductory page for new editors, but asked for further discussion on the details.

Link[edit]

Previously discussed options: Help:Introduction (5 !votes), Wikipedia:Contributing to Wikipedia (1 !vote), and Wikipedia:The Wikipedia Adventure (1 !vote)

  • Support Help:Introduction. To put it simply, this is our best introduction. It's where the deprecated Wikipedia:Introduction now redirects and was made the primary link in the standard welcome template. It covers all the basics without going into unnecessary detail. It is mobile-friendly and accessibility-compliant. It follows the usability best practice of splitting information into easily digestible bite-sized chunks, rather than a single overwhelming page (although it has an option to be viewed as such if one wants). It's the preferred choice of the WMF Growth Team's product manager. It's being actively maintained and is overall ready for inclusion on the sidebar. {{u|Sdkb}}talk 06:47, 12 June 2020 (UTC)
Support any page that is not Help:Introduction, a huge 66-page tutorial that is not user friendly. Stats show us that almost no-one is clicking the non-action action buttons to learn more, so why send even more people there? The fact that someone from the WMF with fewer than 350 edits and zero edits in the Help namespace likes the page should be a big red flag... the last thing we need is another inexperienced WMF member telling us what is best. --Moxy 🍁 11:14, 12 June 2020 (UTC)
The WMF Growth Team is literally the team in charge of new user retention. They're not trying to force anything on us (I was the one who sought out their opinion), but I trust that they know what they're talking about when they say We think that help pages are better when they have a fewer number of links and options -- too many can be overwhelming. In that vein, I think that WP:Contributing to Wikipedia would likely overwhelm, and Help:Introduction would be better. As for the traffic stats, most people come to the page looking for help doing a specific thing and then click on the page relevant to that. Since there are 13 pages linked, of course none individually will be getting as much traffic as the portal. There's also the general 1% rule of the internet to consider. Even the custom-designed newcomer tasks feature only results in 9% of newcomers coming back after 3 days (compared to a baseline 4-5%), so keeping them around is a huge challenge. The stats for the Wikipedia Adventure are similar, and while we don't know how many people who visit WP:Contributing to Wikipedia actually read to the bottom of the page, my guess is that it's shockingly low. {{u|Sdkb}}talk 15:25, 12 June 2020 (UTC)
Yup same team that brought us VE.--Moxy 🍁 15:32, 12 June 2020 (UTC)
The team was formed in 2018, so no. {{u|Sdkb}}talk 15:58, 12 June 2020 (UTC)
I guess I should have been more specific..old timers will understand--Moxy 🍁 17:35, 12 June 2020 (UTC)
  • Support Help:Introduction. Although I don't disagree it is quite voluminous, it covers all the necessities in a fairly well-structured manner and I used it myself for getting started.
    5225C (talkcontributions) 13:02, 12 June 2020 (UTC)
  • Support Wikipedia:The Wikipedia Adventure as I did in the previous discussion. I'm aware of accessibility criticisms and I'm even aware of data that suggests TWA doesn't improve editor retention (though I can't find the place where I read that a few years ago). However, the purpose of the link should be to get people thinking about becoming an editor—it's before the retention period, the part where we need to show them something just interesting enough to get them to make an account. I don't want another dry link with 400 subpages, none of which actually give me something to start editing or give me enough information to have my first edit not be immediately reverted. Barring TWA, I don't believe we have a page suitable for purpose but I would support Help:Introduction as a second and support any other page as better than nothing. — Bilorv (Black Lives Matter) 13:21, 12 June 2020 (UTC)
  • Support Wikipedia:Contributing to Wikipedia It's a one-page, one-stop wonder. Already well trafficked, with lots of videos (which new users are always asking for), and long enough to be actually useful. I would also support Help:Introduction to Wikipedia, but would strongly oppose just Help:Introduction, as it is full of ...meaningless links for newbies, already confuses folks with the VE/source distinction, and there are plenty more useful pages. CaptainEek Edits Ho Cap'n! 14:39, 12 June 2020 (UTC)

Note: An editor has expressed a concern that editors have been canvassed to this discussion. (diff)

I informed all that were in prior discussions, even the ones that like your page. If I missed any, feel free to inform me. --Moxy 🍁 17:35, 12 June 2020 (UTC)
With 50 views a day, it's clear most do not send people there. Wikipedia:The Wikipedia Adventure has more than a 50 percent drop in views by the second page, with a loss of 90 percent by page 3. Not as bad as Help:Introduction but close. --Moxy 🍁 17:42, 12 June 2020 (UTC)
  • Support Wikipedia Adventure as I think Bilorv makes the best case. As the most prominent link for readers, we want to convert them to editors as fast as possible. For all the problems of TWA, its best feature is that it gets readers editing quickly without having to read an entire manual. We can fix the other problems as we go along, and with added urgency given its prominent placement. Support the others as better than nothing. Wug·a·po·des 19:18, 12 June 2020 (UTC)
  • Support Wikipedia:TWA. It's where I'd send new editors, without a doubt; it's clear, concise, to-the-point, and engaging as a tutorial of how the site works technically as well as in policy, working with both in a hands-on environment. This approach is well-tested on other sites - indeed, it's what the onboarding experience looks like on many popular social media platforms - and it works in keeping people engaged throughout, making people less likely to skip the "boring policy bits" because they're actively doing something. Naypta ☺ | ✉ talk page | 19:43, 12 June 2020 (UTC)
    @Naypta, Wugapodes, Interstellarity, and Bilorv: I tried TWA recently and had high hopes for it, since the graphics are definitely nice and the interactivity is a plus. But I came away from it concluding that there are just too many problems, and those problems are too hard to address given how rigidly it's built. To list them out:
    • The JavaScript that keeps track of where you are is very buggy; several times it lost my place and I had to go back to the start of the module. Every time that happens, it's a potential exit point for someone to decide to give up.
    • It displays terribly on mobile, which loses us half of all potential editors right off the bat (and probably more in developing areas).
    • It's not accessibility-compliant, which introduces further issues of discrimination.
    • It's longer than Help:Introduction without really covering anything important that H:I doesn't, and it doesn't touch on all the most important things right off the bat the way H:I does. I don't think most newer editors will have that much patience.
    • There's no instruction on VisualEditor, and while that may not be what we all use, for newer editors it's a very important transition aid.
    • The juvenile tone seems to be okay with some people but very off-putting to others. We can be friendly without being juvenile, and I think H:I strikes a better balance.
    Putting all those together, they add up to a dealbreaker for generalized use, and they would require a ton of work to solve. By comparison, expanding the sandbox elements for H:I into something more fully interactive would not be hard (I might work on it later today). {{u|Sdkb}}talk 20:34, 12 June 2020 (UTC)
    As I said in my original comment, I'm aware of its problems. These are all things that can (and should) be fixed. Despite that, the most important function of this link is getting readers to make an edit, not teaching them rules. For its problems, TWA's really good at that. Wug·a·po·des 20:47, 12 June 2020 (UTC)
    @Wugapodes: I agree on that point. I think the very best thing is to present newcomers with easy edits to make on Wikipedia itself, since it's infinitely more satisfying to make a live edit than one to a sandbox. That's what the Newcomer Tasks feature the WMF is developing will hopefully do extremely well, and we'll want to integrate it once it goes live. For some things, though, a quiz/sandbox environment is best. I've opened up a discussion at H:I and we'll work on adding more of those features; help is welcome from anyone who wants to contribute. Cheers, {{u|Sdkb}}talk 21:25, 12 June 2020 (UTC)refactored 22:42, 12 June 2020 (UTC) to link to discussion
    @Sdkb: I think the points you raise here are definitely important ones. It ought to be possible for an interface admin to add code to MediaWiki:Guidedtour-tour-twa1.js that would automatically redirect mobile users from TWA to H:I, and for accessibility, it might be a good idea to add markup to the top of the page offering H:I as a more accessible alternative. One other option might be to have a choice - something like the below:
    Welcome to Wikipedia!
    Would you like to read a short, accessible introduction to editing Wikipedia, or learn interactively how to edit Wikipedia by taking a tour of the site?
  • That could then redirect mobile users automatically to H:I as described above. Naypta ☺ | ✉ talk page | 21:22, 12 June 2020 (UTC)
  • Support Help:Introduction - TWA's format makes it difficult for a new editor to jump to exactly the information they need. Plus, the whole concept of an interactive "adventure" would be off-putting and distracting for many newcomers. While Help:Introduction is far from perfect, it is clearly the better option, and it's a lot easier to iterate on and improve. - Axisixa T C 22:00, 12 June 2020 (UTC)
  • The proposals are dreadful—design-by-committee with every second word linked and pointless decorations. • Help:Introduction might be ok if each button led to a single page of information. However, few people want to dive into a labyrinth where you never know if you've missed vital points, and later you can never find anything you vaguely recall seeing. • WP:TWA would be suitable for, umm, certain levels of potential editors but a sidebar link should be for useful information you might want to see more than once. • Wikipedia:Contributing to Wikipedia is the best but has too much waffle. There should be a short page with core facts and many fewer links (something that can be searched after a first read), with links to the proposals here. Johnuniq (talk) 01:52, 13 June 2020 (UTC)
H:E is shorter than WP:CTW and just about additions... not sure why so many think new editors will read over 50-plus pages to learn anything, considering the data we have about them... odd, very odd to me. -Moxy 🍁 12:23, 13 June 2020 (UTC)
  • Only support something that makes it clear in the first few sentences that Wikipedia is an encyclopedia based upon what reliable sources say about a subject and that editors' opinions and knowledge/expertise are not to be used. This could be followed by something short about reliable sources, being a mainstream encyclopedia and about original research. Doug Weller talk 11:15, 14 June 2020 (UTC)
  • Support Quickstart – not sure about anybody else, but I just try out my new phone, program, tv remote, without reading the manual, or only after briefly scanning it. Maybe later, after I can't turn the phone off, or find Netflix, will I go to the manual (and then, slightly annoyed that the user interface is so poorly designed, that it isn't self-evident). I support an introduction that can fit on 3/4 of a laptop page and takes about a minute to scan. As a new Wikipedia editor, I just want to edit something, anything, to see how it works, and then learn by doing; not spend time reading endless explanations, and trying to remember what I read forty pages ago, and whether that was more important. I'll get to reading all that later, after I've got some experience. Remember learning to ride a bike? The manual is for explaining how to adjust your seat height, attach a lamp, or change a tire; it's not about teaching you how to ride on two wheels. Mathglot (talk) 09:35, 16 June 2020 (UTC)
    @Mathglot: The first content page of Help:Introduction, Help:Introduction to Wikipedia, is basically what you're describing. {{u|Sdkb}}talk 10:22, 16 June 2020 (UTC)
  • Support Wikipedia Adventure great for younger editors. 104.249.229.201 (talk) has made no other edits. The preceding unsigned comment from a Canadian IP address was added at 05:50, 17 June 2020 (UTC).
  • Support Help:Introduction it is easy to use and looks good. --Tom (LT) (talk) 07:41, 21 June 2020 (UTC)
  • Update: Following up for those here who haven't clicked through to the Help:Introduction discussion, we've added a slew of interactive components, including custom sandboxes, quizzes, and invitations to make easy live edits to articles through tools like Citation Hunt. We're planning on adding a few more quizzes, and as mentioned above the interactivity will get a further boost once the new Growth Team features are implemented, but I hope the present efforts will be enough to satisfy the concerns of some of those who opted for TWA above and perhaps resolve the deadlock we seem to currently be at. {{u|Sdkb}}talk 06:11, 1 July 2020 (UTC)

Label and tooltip[edit]

Previously discussed label options: "Tutorial" and "Editing tutorial". Previously discussed tooltip option: "Learn how to edit Wikipedia".

  • Support Tutorial, with "Learn how to edit Wikipedia" as tooltip. The renaming of the section where this will presumably be located to "contributing" makes it clear that this is an editing tutorial, not a tutorial on how to read Wikipedia, so we should go with the shorter label for conciseness. No one has raised any concerns about or suggested any alternatives for the previously discussed tooltip. {{u|Sdkb}}talk 06:47, 12 June 2020 (UTC)
  • Support Tutorial, with "Learn how to edit Wikipedia" as tooltip per Sdkb's reasoning.
    5225C (talkcontributions) 13:02, 12 June 2020 (UTC)
    5225C (talkcontributions) 23:53, 12 June 2020 (UTC)
  • In the new condensed sidebar there's less chance of the link getting lost, but I'd still prefer something very in-your-face as a label. I like an idea alluded to by MMiller (WMF) here: "Start editing". Or "Learn to edit". (As a tooltip, "Learn how to edit Wikipedia" or similar would be fine.) As a last resort, I'd prefer "Editing tutorial" to "Tutorial". (Who reads the section title? People navigate much more non-linearly than that. I want to know what a link is the instant I look at it, no contextual clues needed.) — Bilorv (Black Lives Matter) 13:09, 12 June 2020 (UTC)
    I like "Learn to edit" a lot — it gives a nice call to action. "Start editing" would imply that you're making actual edits during the tutorial, which isn't the case apart from the sandboxes. {{u|Sdkb}}talk 16:33, 12 June 2020 (UTC)
  • Support - first preference: "Learn to edit", then "Tutorial". MER-C 16:56, 12 June 2020 (UTC)
  • Learn to edit for label since it's short and actionable; no strong opinions on the tool tip. Wug·a·po·des 19:20, 12 June 2020 (UTC)
  • Learn to edit per Wugapodes seems best to me. Tutorial isn't quite as clear - tutorial of what? Especially for an educational site, for people for whom English is not their native language, that could potentially lead to confusion. "Learn to edit" is clear in intent and action. For the tooltip, "Learn how to edit Wikipedia" seems good. Naypta ☺ | ✉ talk page | 19:43, 12 June 2020 (UTC)
  • Support "Learn to edit" with "Learn how to edit Wikipedia" as the tooltip.
    5225C (talkcontributions) 04:21, 14 June 2020 (UTC)
  • Learn to edit. Simple and straightforward --Tom (LT) (talk) 07:41, 21 June 2020 (UTC)

Positioning[edit]

Previously discussed option: Contribute section, just below Help.

  • Support previously discussed option. This seems like the logical placement, and no one has raised any concerns about it or suggested any alternative. {{u|Sdkb}}talk 06:47, 12 June 2020 (UTC)
  • Support placement below Help as the logical spot.
    5225C (talkcontributions) 13:02, 12 June 2020 (UTC)
  • Above help, as the first thing under "Contribute" should be something that leads me somewhere where I will learn to contribute. — Bilorv (Black Lives Matter) 13:22, 12 June 2020 (UTC)
  • Immediately below help, please. MER-C 16:57, 12 June 2020 (UTC)
  • Below help. --Tom (LT) (talk) 07:41, 21 June 2020 (UTC)

Current events tooltip[edit]

Previously discussed options: "Find background information on current events" (status quo, 0 !votes), "Articles related to current events" (2 !votes), and "Articles on current events" (1 !vote).

  • Support "Articles related to current events", since "on" would imply that we're writing news articles, which we're not, especially in cases like recent deaths. {{u|Sdkb}}talk 06:47, 12 June 2020 (UTC)
  • Support "Articles related to current events", per Sdkb and for clarity with WP:NOTNEWS.
    5225C (talkcontributions) 13:02, 12 June 2020 (UTC)
  • Support "Articles related to current events" as above. — Bilorv (Black Lives Matter) 13:23, 12 June 2020 (UTC)
  • Per above Wug·a·po·des 19:23, 12 June 2020 (UTC)
  • Comment as previous closer: As I had indicated to Sdkb when discussing the close previously their reducing discussion to a strict vote (as indicated in the summary introduction to this section) is not consistent with policy or practice. I would encourage anyone considering a close of this section to read the previous discussion - it's short so won't take long - rather than merely accepting the !vote summary produced here as a vote summary. Best, Barkeep49 (talk) 23:14, 15 June 2020 (UTC)
  • Articles related to current events. This is a very clear description of what the page contains. --Tom (LT) (talk) 07:41, 21 June 2020 (UTC)

Community portal tooltip[edit]

Previously discussed options: "About the project, what you can do, where to find things" (status quo, 0 !votes) and "The hub for editors" (2 !votes)

  • Support "The hub for editors". This concisely sums up the portal's role. {{u|Sdkb}}talk 06:47, 12 June 2020 (UTC)
  • Weak support for "The hub for editors", since I have no viable alternative.
    5225C (talkcontributions) 13:02, 12 June 2020 (UTC)
  • Comment as previous closer: As I had indicated to Sdkb when discussing the close previously their reducing discussion to a strict vote (as indicated in the summary introduction to this section) is not consistent with policy or practice. I would encourage anyone considering a close of this section to read the previous discussion - it's short so won't take long - rather than merely accepting the !vote summary produced here as a vote summary. Best, Barkeep49 (talk) 23:15, 15 June 2020 (UTC)

Miscellaneous discussion[edit]

  • There were several other proposed tooltip changes that got very little discussion and closed as no consensus. I don't want to overwhelm this discussion by creating a section to follow up on each of them, but I'll just throw them out here, and if they turn out to be uncontroversial, perhaps we can find consensus to implement them. They are:
  • For Special pages, change from "A list of all special pages" to "List of automatically generated pages for specialist purposes"
  • For Page information, change from "More information about this page" to "Technical information about this page"
  • For View user groups, change from nothing to "view the permissions of this user" (for non-admins) and "manage the permissions of this user" (for admins)
How do those sound? {{u|Sdkb}}talk 06:47, 12 June 2020 (UTC)
  • The suggested extra wording for special pages and page information does not help, the original wording is good. For user groups, I don't see why different text for admin/non-admin users is needed, just "Permissions of this user" would be fine (that's all that is needed for a hint about what groups do). Johnuniq (talk) 07:27, 12 June 2020 (UTC)
  • I concur with Johnuniq regarding special pages. Although the current tooltip is pretty useless, the proposal seems far too long. I would instead propose "List of pages for specialised purposes" to cut out some unnecessary elaboration and to clarify that special pages are for particular uses, not particular users.
    I support the proposed changes for page information and user groups, they seem to clarify their respective purposes quite well.
    5225C (talkcontributions) 13:02, 12 June 2020 (UTC)
  • I'd like "List of system pages" for "Special pages", because that's effectively what it is. Support the other suggestions. Naypta ☺ | ✉ talk page | 19:46, 12 June 2020 (UTC)
"List of system pages" sounds good to me. {{u|Sdkb}}talk 06:31, 22 June 2020 (UTC)
  • Support all three: "A list of automatically generated pages for specialist purposes" ,"technical information about this page", and "view the permissions of this user." All three proposals make it more clear and are more accurate about what the targets do. In the past I have been confused by the titles and I think these tooltips would have helped. --Tom (LT) (talk) 07:41, 21 June 2020 (UTC)

Sidebar in mobile view[edit]

@Sdkb: All of the discussion on the sidebar has been about the sidebar in desktop view. I think we should also discuss how we can improve the sidebar in mobile view. Do you have any opinions on how we can do that? Interstellarity (talk) 20:49, 12 June 2020 (UTC)

Interstellarity, I think we could certainly have a discussion analogous to WP:SIDEBAR20 for mobile. My sense is that the WMF has been more heavily involved with that than they have been with the desktop view, so it might be good to start at the idea lab and research the background. There are also other discrepancies such as the fact that mobile makes it very easy to see a user's edit count whereas desktop mostly hides it; we could talk more about those at WP:Usability. {{u|Sdkb}}talk 21:32, 12 June 2020 (UTC)

Tidying up the sidebar[edit]

I recently had an edit request declined at MediaWiki_talk:Sidebar#Protected_edit_request_on_12_June_2020 that should tidy up the sidebar a little bit. I think it is best that we seek consensus for this change. Pinging @MSGJ: if he would like to comment. Interstellarity (talk) 11:08, 19 June 2020 (UTC)

Outlining a new usergroup?[edit]

It's yet again snowing in June and this won't be going anywhere... RandomCanadian (talk / contribs) 03:46, 5 July 2020 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


I've laid the basis for this out on the Wikimedia Discord, where it seemed to get a mostly supportive response, so I'm presenting it here. Its aim is to decrease the backlog many administrators face.

The proposal, in short, is that there should be a new usergroup on the English Wikipedia with the ability to delete and protect pages. This has been colloquially referred to as the Moderator.

This usergroup should be:

  1. Used for closing XfDs
  2. Used to monitor requests for page protection (I've been told that the backlog can be massive at times)
  3. Able to access deleted revisions and deleted pages (but not delete revisions)
  4. Used for any other process that requires the protection and/or deletion of pages

This usergroup should not be:

  1. Used solely for countering page-move vandalism, although it would be handy for that
  2. Given the ability to block users or take any other administrative action besides protecting and deleting pages
  3. Able to edit fully-protected or template-protected pages.

I encourage everyone to build on this idea. I'm confident this will work in practice; take, for example, the Articles for Creation reviewers and the new page reviewers.

I've laid out two proposals for how the permissions would be granted.

  • Proposal 1: Editors will have to make a WP:PERM-style request for permission.
  • Proposal 2: Editors will have to make a voting-based request (similar or identical to an RfA), mostly due to the power of the tools given.

I encourage you to discuss the different features of the usergroup. I also understand that the tools mentioned can be powerful, and if implemented will most likely be a step up from the current editor usergroups, hence proposal 2. dibbydib boop or snoop 09:47, 15 June 2020 (UTC)

Responses (Outlining a new usergroup)[edit]

  • Support (Proposal 2) as nominator. dibbydib boop or snoop 09:48, 15 June 2020 (UTC)
  • Oppose both Missed seeing this on the discord, but this seems another admin-lite, but it hands over most of the big three in their entirety, so any vetting would need to be RfA level (in which case...do RfA). I've participated in idea-level discussion for an ultra-limited page protect right being unbundled, but this would be vastly too much, and I am very firmly against it. — Preceding unsigned comment added by Nosebagbear (talkcontribs) 10:22, 15 June 2020 (UTC)
  • Support (P1) ❯❯❯ S A H A 10:37, 15 June 2020 (UTC)
  • No. The differences between this and full adminship, as outlined above, are nonsensical. If you can be trusted to delete, you can be trusted to block - the rare high-profile case or boneheaded error aside, the average deletion is much, much more controversial than the average block. And if you fully protect a page, you have to take responsibility to respond to edit requests. —Cryptic 10:46, 15 June 2020 (UTC)
  • No, as seeing deleted revisions requires a community process such as RFA. This isn't my opinion, it is well established by the WMF and is not optional. They would still have to go through the same RFA process, the same one that admins go through, and if they go through all that, I sincerely doubt anyone is going to want fewer tools. No admin is compelled to use all the tools, btw. Dennis Brown - 19:26, 15 June 2020 (UTC)
  • Oppose. It would be better to fix RfA than to create admin-lite. {{u|Sdkb}}talk 19:37, 15 June 2020 (UTC)
  • Support(proposal 2) Tbiw (talk) 21:30, 15 June 2020 (UTC)
  • Oppose both These are both crucial admin-only tools that should not be unbundled. If you really need the delete button just go through an RfA. – John M Wolfson (talkcontribs) 23:39, 15 June 2020 (UTC)
  • Oppose viewdelete already requires RfA-like review, so this is going too far. Possibly, if they didn't need viewdelete and were only going to have delete and protect (assuming editcascadeprotected, noted below, got split), I'd reconsider. — xaosflux Talk 11:35, 16 June 2020 (UTC)
  • In a country we have a president and a vice-president. They appoint many people to work with them. This idea reduces the workload, and people won't be fighting so much for adminship. I love this idea; I will call it superb. It will also give other people tasks to do, and people will love doing them. Let's get them engaged and contributing not only by editing normal pages but by having a task to do. Tbiw (talk) 09:06, 17 June 2020 (UTC)
    I'm sure there are people who would love deleting and protecting pages, but the people who want to read about something that has been deleted or edit something that has been protected will certainly not be happy. These rights need just as much community trust as the whole admin toolset. Phil Bridger (talk) 09:55, 17 June 2020 (UTC)
  • Oppose. Proposal 1 isn't an option due to legal issues and I don't see a point in going thru an RfA(ish) but only getting couple of tools (instead of all of them).  Majavah talk · edits 09:14, 17 June 2020 (UTC)
  • Oppose both obviously (and to be honest, you lost any credibility with "I've laid the basis for this out on the Wikimedia Discord"). Proposal 1 is technically impossible and proposal 2 is totally pointless. ‑ Iridescent 13:54, 17 June 2020 (UTC)
  • Oppose both 1 for legal issues, 2 for being pointless. Cabayi (talk) 14:03, 17 June 2020 (UTC)
  • Oppose - Like others have said, proposal 1 is a bad idea and 2 is pointless. However, I'm not opposed to a non-admin user group that can temporarily semi-protect pages that are the target of rampant IP/new account vandalism. - ZLEA T\C 20:57, 20 June 2020 (UTC)
  • Note that many IPs and new accounts revert vandalism (I think most of my edits before I registered were such), and semi-protection would prevent them from doing so. Phil Bridger (talk) 21:16, 20 June 2020 (UTC)
Phil Bridger, if a page is semi-protected, it likely would not be vandalized in the first place. - ZLEA T\C 00:14, 22 June 2020 (UTC)
  • Strongly oppose Page deletion should not be given to anyone who has not passed RfA, or some similar community process. DES (talk)DESiegel Contribs 22:51, 27 June 2020 (UTC)
  • Oppose both AfD closes that require deletion also require the judgement of an admin. Page deletion is not that frequent a need. I've never seen a delete-closed AfD not get deleted within minutes. G5 or speedy deletions sometimes take a few hours, but in the scope of a long-lived encyclopedia, that is not a big deal. Second, users who have not passed RfA should not be protecting pages. This strikes me overall as a solution to a nonexistent problem. ThatMontrealIP (talk) 23:05, 27 June 2020 (UTC)
  • Oppose both. While I like the proposal, it's unnecessary. If you need those sort of tools and are experienced enough to use them, you might as well put the effort in to an RfA. There shouldn't be a confusing "admin-lite" gap between normal users and sysops. Ed6767 talk! 23:17, 27 June 2020 (UTC)

Discussion (Outlining a new usergroup)[edit]

  • I don't see the point. If you have to go through an RFA or equivalent why would you do it for only half the toolset? Cabayi (talk) 10:13, 15 June 2020 (UTC)
  • (after edit conflict) Anyone who wants these rights should apply for adminship. I know that nearly everyone agrees that there are problems with WP:RFA discussions, but that doesn't mean that we should give most of the admin toolset to users without such a rigorous process. Phil Bridger (talk) 10:15, 15 June 2020 (UTC)
  • Proposal 1 is a variation of Wikipedia:Perennial proposals#Automatically grant adminship to users with a certain number of edits or time editing. Cabayi (talk) 10:19, 15 June 2020 (UTC)
  • Proposal 1 is a nonstarter. Legal has repeatedly vetoed past proposals to allow access to deleted content without the equivalent of an RFA. —Cryptic 10:39, 15 June 2020 (UTC)
  • If an editor has the ability to change protection level, but not to edit full-protected or template-protected pages, couldn't they just unprotect a page so that they can edit it, at which point they essentially already have the right? {{u|Sdkb}}talk 19:36, 15 June 2020 (UTC)
  • Just as a note, I don't recall seeing any discussion of this on Discord, contrary to the opening line. --Izno (talk) 19:51, 15 June 2020 (UTC)
    It doesn't really matter whether this was discussed there or not. Decisions about what should happen on Wikipedia are made here on Wikipedia, and this is an obvious non-starter. At least this makes a change from the nonsense that I often see carried here from WP:IRC. Phil Bridger (talk) 20:07, 15 June 2020 (UTC)
    True also. I'm actually attempting to disavow any knowledge on the part of regular Discord participants as to this discussion. Don't want people knowing about the cabal. :) --Izno (talk) 22:17, 15 June 2020 (UTC)
  • An administrator's toolset should be complete, and we should ask them about this. If we count votes, support for this idea is greater than the opposition, so the idea is in. It will assist administrators and prevent constant contesting for adminship by anyone not up to it. Tbiw (talk) 21:34, 15 June 2020 (UTC)
  • Note: this would need phab:T71607 (split protect and editcascadeprotected) to occur first. — xaosflux Talk 11:33, 16 June 2020 (UTC)
  • I think what could make people accept this proposal is to defend it more and respond promptly to their comments and opinions. Tbiw (talk) 21:01, 21 June 2020 (UTC)
  • Option 1 of this proposal is likely never going to be accepted. From here:
"...at this point, the Wikimedia Foundation will not endorse this suggestion and will not implement it in the unlikely event that it should reach consensus. For legal reasons, we require RFA or an RFA-identical process for access to certain tools (deleted revisions among them)."
Ed6767 talk! 23:27, 27 June 2020 (UTC)

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Get rid of stub tags[edit]

I know this is a bold proposal, and is likely to be controversial, but stub tags aren't useful. They don't get people to edit stub articles, which is their stated purpose. They do have a number of negatives: they often remain in articles long after the article has been brought up to Start-class or higher; they conflict with the classes stated on talk page banners, which are often more up-to-date; and they add a useless image and clutter to the articles. It's time to begin to think about eliminating them. For those people who might be feeling reticent, perhaps an experiment should be run where stub tags are removed from a random subset of articles, which are then compared, in say one year's time, to a subset of similar articles and measured for levels of (destubifying) improvement. Any thoughts? Abductive (reasoning) 00:35, 21 June 2020 (UTC)

I think you have actually identified two distinct issues here: 1) we don't have any evidence that stub tags are achieving their stated purpose, and 2) stub tags are conflicting with WikiProject assessments. For #1, I think an experiment could yield some interesting results. For #2, I think this would make a nice bot task: remove stub tags when a WikiProject assessment is upgraded from stub. It might be worth upgrading {{WPBannerMeta}} to generate a reassess flag for the bot to add when a stub tag is removed but a WikiProject still has it assessed as a stub. For the experiment on #1, I would suggest:
  1. Get a list of something like 10,000 random articles, and remove any that are not assessed as a stub by WikiProjects, or have neither a stub tag nor a WikiProject banner on its talk page.
  2. Do an assessment of the remaining articles and remove any that aren't actually stubs.
  3. Split WikiProject claimed articles by stub-tagged or not, and remove/add stub tags to get each WikiProject's article count roughly even (shoot for at least 1/3 of each). Remove stub tags from the unclaimed articles to even out the total count, again going no less than 1/3 for either group.
  4. For the two groups, drop the top and bottom (quartile? ⅒th?) in terms of edit size, ignoring any bot edits, and compare the two groups by interest (number of edits by unique editors), engagement (number of non-revert, non-patrolling edits by registered editors), and overall article change (total prose added to article during experiment run time). VanIsaacWScont 14:17, 21 June 2020 (UTC)
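The trim-and-compare procedure in step 4 could be sketched roughly as follows. This is a hypothetical illustration only: the sample data, the per-article "delta" metric, and the function names are made up for the example, and a real analysis would pull its numbers from the revision history.

```python
import statistics

def trim_quartiles(articles, key):
    # Drop the top and bottom quartile by the given metric (step 4 above).
    ranked = sorted(articles, key=key)
    q = len(ranked) // 4
    return ranked[q:len(ranked) - q] if q else ranked

def compare_groups(tagged, untagged, metric):
    # Mean per-article value of a metric (edits, prose added, etc.)
    # for the stub-tagged and untagged groups.
    mean = lambda group: statistics.mean(metric(a) for a in group)
    return mean(tagged), mean(untagged)

# Hypothetical per-article prose growth (bytes) over the experiment.
tagged = [{"title": f"T{i}", "delta": d}
          for i, d in enumerate([0, 40, 120, 80, 10, 300, 55, 90])]
untagged = [{"title": f"U{i}", "delta": d}
            for i, d in enumerate([5, 20, 60, 15, 0, 10, 45, 30])]

result = compare_groups(
    trim_quartiles(tagged, key=lambda a: a["delta"]),
    trim_quartiles(untagged, key=lambda a: a["delta"]),
    metric=lambda a: a["delta"])
print(result)  # prints (66.25, 18.75)
```

Trimming both tails before averaging keeps one heavily expanded outlier from dominating either group's mean, which is the point of dropping the top and bottom quartile.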
Thanks for your input. Abductive (reasoning) 23:07, 21 June 2020 (UTC)
  • Oppose The OP doesn't present any evidence to support the proposal. Me, I'd rather eliminate project templates and assessments as I've never seen these do any good and they tend to stick around for longer. The stub templates are comparatively unobtrusive and have an encouraging and pleasant tone. Andrew🐉(talk) 22:47, 21 June 2020 (UTC)
So, I present no evidence, then you say talk page assessments don't do any good, and also present no evidence. Abductive (reasoning) 23:07, 21 June 2020 (UTC)
This is not my proposal and so it's not my responsibility to be providing evidence. But here's a couple of examples – both being articles that I created recently. Firstly, consider Ambarnaya. I created this with the {{river-stub}} tag. User:Catan0nymous added two more stub tags: {{Russia-stub}}, {{Siberia-stub}} and user:Abune then consolidated the stubs into {{Russia-river-stub}} and {{Siberia-stub}}. These tags seem to have three functions:
  1. Placing the article into the categories: Category:Siberia geography stubs and Category:Russia river stubs
  2. Displaying some appropriate icons -- maps of Russia and Siberia
  3. Advising the reader that the article is just a start and inviting them to help expand it
My second example is List of longest-running radio programmes. In that case, I started out with the {{under construction}} tag because the page needed a good structure as a foundation and I wasn't sure what would be best. Once that was done, I removed the tag. By that time, the list structure was established and I used the {{dynamic list}} tag to indicate that the list was quite open-ended. That tag also invites the reader to add sourced entries, and so I didn't see the need for a stub tag too.
Neither of these cases indicates that stub tags are a problem that needs fixing. Sensible editors use them as they see fit and they don't seem to cause any trouble. One feature which helps is that, by convention, they are placed at the foot of an article, where they don't get in the way.
Andrew🐉(talk) 08:00, 22 June 2020 (UTC)
  • Andrew Davidson, I agree about the assessment system. It's absurd. Articles can be Stub, Start, C, B, GA, A, or FA. That's seven levels, which is absurd. The gradations are way too small, and the assessment criteria way too subjective. Class A articles are "Very useful to readers", but GA are "Useful to nearly all readers". That's absurd. -- RoySmith (talk) 02:15, 22 June 2020 (UTC)
  • Comment I doubt the suggested experiment would produce any clear results frankly. People are often reluctant to remove stub tags, or simply don't notice them. The wikiproject tags are just as prone to under-rate as the ones on the article in my experience. What might be more useful is a list of articles tagged as stubs where the article is over a certain size (not sure what). Reviewing these would I expect show most can be removed. Of course people still have to do this. Johnbod (talk) 23:59, 21 June 2020 (UTC)
    • The issue isn't so much picking a certain size as defining what "size" means. The easy way out is to count the characters of wikitext, and you end up with quarry:query/46032. But plenty of those have only a half dozen or fewer sentences, despite their absolute length. —Cryptic 01:22, 22 June 2020 (UTC)
Like Negombo_Polling_Division - but that has massive tables, no doubt like the other Sri Lankan electoral districts, & certainly isn't a stub. In fact I'm pretty sure most of these are mainly tables. But thanks. Johnbod (talk) 02:44, 22 June 2020 (UTC)
  • Do you know of any utilities for querying the size of the prose text? VanIsaacWScont 02:06, 22 June 2020 (UTC)
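One crude approach to the prose-size question is to strip the obvious non-prose wikitext before counting characters. The sketch below is a heuristic only: the regexes will miss nested templates and other wikitext corner cases, and the function name is made up for illustration.

```python
import re

def prose_size(wikitext):
    # Rough readable-prose estimate: strip tables, templates, footnotes
    # and section headings, then count what remains.
    text = re.sub(r"\{\|.*?\|\}", "", wikitext, flags=re.DOTALL)      # tables
    text = re.sub(r"\{\{.*?\}\}", "", text, flags=re.DOTALL)          # templates (non-nested)
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.DOTALL)  # footnotes
    text = re.sub(r"^=+.*?=+\s*$", "", text, flags=re.MULTILINE)      # headings
    return len(text.strip())

print(prose_size('Intro.\n{| class="wikitable"\n|a||b\n|}\nMore.'))  # prints 13
```

On a table-heavy page like the Sri Lankan electoral districts mentioned above, this would count only the surrounding sentences rather than the full wikitext length.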

Considering I've run contests with over 1,000 articles destubbed directly from stub categories, like WP:The African Destubathon and Wikipedia:The Great Britain/Ireland Destubathon, and am running Wikipedia:The 50,000 Challenge, which directly feeds off stub tags, this is one of the dumbest, most ignorant proposals I've seen for some time, and that's saying something!! There is an issue with updating the project tags once an article is no longer a stub, and with stub tags remaining in articles that are not stubs, but that's hardly a reason to get rid of them entirely. Rosiestep and myself proposed a bot to sort that out, but the Wikipedia community, being the divided way they are, wouldn't give us the consensus we need to sort it out.† Encyclopædius 08:19, 22 June 2020 (UTC)

I don't see why, User:Encyclopædius. I won some sort of prize in one of your excellent contests, but when looking for articles to improve, I remember just removing the stub tag on about five for every one that actually was a stub. I don't agree with complete abolition, but they are in a serious mess & should be sorted out. Johnbod (talk) 13:58, 22 June 2020 (UTC)
Yup, and can you believe when I proposed a bot to control the inconsistency and remove stub tags from articles with over 1.5 kb readable prose and update the talk page tags some people opposed? Andrew Davidson for a start. † Encyclopædius 14:04, 22 June 2020 (UTC)
Can a bot at least go around and remove stub tags from all really huge articles that have talk page templates claiming Start-class or above? Then maybe work its way down to smaller articles until it is on the verge of making errors? Abductive (reasoning) 04:18, 24 June 2020 (UTC)
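The rule such a bot might apply can be sketched as below. This is an assumption-laden illustration: the 1.5 kB threshold echoes the earlier bot proposal, raw wikitext length stands in for readable prose, and the "-stub" template naming convention and function names are my own choices for the example.

```python
import re

# Matches stub templates such as {{river-stub}} or {{Russia-stub}},
# assuming the usual "-stub" naming convention.
STUB_TAG = re.compile(r"\{\{[^{}]*-stub\}\}\s*", re.IGNORECASE)

def should_detag(wikitext, talk_assessment, threshold=1500):
    # Only detag when the talk-page banner rates the article above Stub
    # AND the page is comfortably past the size threshold. Raw wikitext
    # length is a crude stand-in for readable prose here.
    above_stub = talk_assessment.lower() not in ("stub", "")
    return above_stub and len(wikitext) >= threshold

def remove_stub_tags(wikitext):
    # Strip all {{...-stub}} templates from the article text.
    return STUB_TAG.sub("", wikitext)
```

Working from the largest articles downward, as suggested, means the `should_detag` check starts in the regime where both conditions agree and errors are least likely.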
  • "stubs" help navigate articles that need attention. They typically are located at the very bottom of article and do not interfere with regular reader who has no intention to improve article. Basically nothing is wrong with "stubs" as far as I'm concerned. User:Abune (talk) 13:04, 22 June 2020 (UTC)
Very true, User:Abune. Readers rarely notice them, and even if they do, it is hard to see that they have any negative impact. Some editors find them useful. What's the problem? (A: there isn't one). Edwardx (talk) 14:34, 22 June 2020 (UTC)
  • Qualified support. I generally agree with the issues raised by the proposer. Stub tags are not particularly aesthetically pleasing, and do tend to linger after the article has been expanded to the point where they are no longer appropriate. Efforts to just find and remove them at that point become busywork. I am wondering if there is some template magic that can be applied so that stub tags on articles that are likely not stubs can turn invisible, and just show up as categories. BD2412 T 15:11, 22 June 2020 (UTC)
  • Simpler solution: If you come across an article that you think is no longer a stub... remove the tag. Problem solved. Blueboar (talk) 15:19, 22 June 2020 (UTC)
Not really - pretty much no one who would know how to do this ever looks at these articles, all xxx,xxx of them (what is the number, does anyone know?). Johnbod (talk) 21:44, 22 June 2020 (UTC)
There are 3,399,601 stubs as of now. You can see the stats here. TryKid[dubiousdiscuss] 23:28, 22 June 2020 (UTC)
Yikes! Over half our articles. These are project ratings though. I see 4,310 "top importance" stubs! Thanks, Johnbod (talk) 01:28, 23 June 2020 (UTC)
I'd like to see one of those top importance stubs. To make a start I followed TryKid's link to find Category:Stub-Class_articles. From this, I selected Category:Stub-Class Accounting articles because there is a systemic bias against business articles. Accounting standard looked promising and I found that this is assessed as High importance but Stub class. This is all coming from the project template as the article page doesn't have any tags. It could use some because I immediately noticed a blatant howler, "Accounting standards were largely written in the early 21st century." What I also notice is that while its talk page only had 7 readers in the last 30 days, the article itself had 2,481. I could now tag-bomb the article but what it really needs is improvement... Andrew🐉(talk) 08:55, 23 June 2020 (UTC)
Yeah, 3M project rating stubs. The number of "tagged" stubs seems to be 2,265,086, from Category:All stub articles. This number looks more correct since many of the "top importance stubs" aren't stubs anymore and are already detagged but the talk page wasn't updated. As previously pointed out, even many of the "tagged" stubs aren't stubs. Maybe a bot that automatically upgrades project rating of stubs to "start" if a tag isn't found on the main page is needed. Blofeld's idea of automatic detagging if the article is above a certain size was also good. TryKid[dubiousdiscuss] 12:59, 23 June 2020 (UTC)
  • Support In my early (2005) days, stub-sorting was one of my favorite hobbies, and it saddens me to finally deprecate the stub templates, but nowadays they are duplicative of the assessment templates on the talk page and thus unnecessary. -- King of ♥ 01:58, 23 June 2020 (UTC)
  • Possible alternative, I lean towards oppose but perhaps we could link the stub tags to the WikiProject banner, if the article is assessed as a Stub class a bot adds the tag, if (hopefully when) the article is upgraded to Start or higher a bot removes the tag. Cavalryman (talk) 02:19, 23 June 2020 (UTC).
    • This is a very good idea. It fixes the discrepancy between main page and talk, while keeping some level of friendly encouragement at the main page. - Nabla (talk) 14:52, 27 June 2020 (UTC)
  • Oppose While I agree that many of the problems you listed are real and affect Wikipedia, stub tags are necessary. They may not be very effective at getting readers/editors to add content to or destubify articles, but they make a vital contrast between what is and is not a reasonable length for an encyclopedic article. Most readers don’t care to browse Wikipedia’s myriad informational pages on article length, the different classes, prose, etc. Stub tags are simple, easily understood, and to-the-point. If we’re going to get rid of stub tags, why not just get rid of the stub classification altogether? It doesn't make sense. Improvements should be made, but we need them. Their importance to the project overrides any negative aspects. MrSwagger21 (talk) 02:26, 23 June 2020 (UTC)
  • Oppose per MrSwagger21. - ZLEA T\C 02:52, 23 June 2020 (UTC)
  • Comment on balancing editor and reader needs. Regarding editor needs (which we always tend to overprioritize, given the systemic bias introduced by who we are), my takeaway here is that it seems there's no evidence that the tags draw editors, and while it's perfectly plausible they do, this might be a good thing for someone to do a research study on. Regarding reader needs (which I don't see really getting proper attention here), the way we indicate article quality is a little quirky — we indicate GA/FA with a topicon, but stubs with a tag at the bottom, and nothing in between. I think there's a reasonable case to be made that stubs, given their low quality, ought to have the tags as a sort of warning. The counterargument would be WP:NODISCLAIMERS and the fact that there's a distinction between low quality and just short, and most stubs in my anecdotal experience are not lower quality than start/C class pages, just shorter. I'm not sure where I land on the necessity of stub tags, but I hope we'll consider how they impact the experience of non-editing readers, not just editors. I have brought up before that there is room for improvement in how we present content assessments to readers more generally (particularly, the difference between GAs and FAs is not made clear), and I'd like to see more work in that area. {{u|Sdkb}}talk 07:53, 23 June 2020 (UTC)
  • Oppose no. They are helpful and low-key. Not in anybody's way. Regards, GenQuest "Talk to Me" 10:47, 23 June 2020 (UTC)
  • Comment: sorry for the somewhat self-promo, but this is something that could hopefully be done with relatively little consternation were my proposal for an extension that adds categories from tags on the talk page successful. Such a move would allow the pages to retain the stub categories, without the duplication of effort in tagging the article with stub tags and also doing the assessments on the talk page. Naypta ☺ | ✉ talk page | 13:07, 23 June 2020 (UTC)
  • weak oppose for now - People I talk to (non editors) are often not aware a talk page even exists for an article. If a stub tag encourages the occasional person to add content then I see that as a Good Thing. If there were some way of showing that there was no evidence that this occurs, then I'd support their deprecation. I should add that the proposal was worth bringing up and I did consider supporting, and I do think the topic is worth exploring. Cas Liber (talk · contribs) 04:54, 24 June 2020 (UTC)
  • Conditional support. Stub templates are messy and outdated, and do not do what they are supposed to do, although they have other important uses. I suggest removing the templates but replacing them with categories or a function within WikiProject banners. Rehman 05:21, 24 June 2020 (UTC)
  • Support - simplification is good. What stubs do can/is/should be done in a talk page banner, e.g. wikiproject assessments. Bottom line is we should have one place where we record what state an article is in, and that one place should probably be in a talk page banner. Levivich[dubiousdiscuss] 19:26, 26 June 2020 (UTC)
  • Oppose per Cas Liber. Stub tags are at worst benign. They're the literal bottom of the article and if a reader wants to ignore it, they can. I really can't understand how they can be seen as harmful. On the other hand, if they get 1 reader to help expand an article, the encyclopedia benefits greatly at pretty much no cost. They're a clear net positive. Putting them on the talk page may be convenient for us and our internal categorizations, but that's counterproductive for article improvement. Most people don't know about talk pages (seriously, ask your friends the last time they looked at an article talk page), so hiding the stub tags where only insiders will find them is in my mind a net negative. Wug·a·po·des 21:37, 27 June 2020 (UTC)
  • Oppose. Stub templates are a type of maintenance template. Because of their ubiquity, we've decided to put them at the bottom instead of the top of the page. They share all the pros and cons of other maintenance templates. On one hand, we think that – given the dynamic nature of Wikipedia – we should alert readers and editors if something is not right with an article. On the other hand, we don't know if more time is spent tagging or actually fixing articles. The main problem I see with stub templates is what others have pointed out above. We have two systems to mark an article as a stub: stub templates and talk page banners. These are not always in sync. When this happens, the fault is with the editor who (de)stubs an article using only one method. The guideline at WP:DESTUB says to do it using both. I'm sure we're moving toward automatic article assessment (WP:ORES) at some pace. In the meantime, we should automate getting rid of the discrepancies between stub templates and talk page assessments using a bot. – Finnusertop (talkcontribs) 23:20, 27 June 2020 (UTC)
"we don't know if more time is spent tagging or actually fixing articles" Probably the first one, in my considered opinion. RandomCanadian (talk / contribs) 03:43, 5 July 2020 (UTC)
  • Support A good solution, as already proposed by others, would be for categorisation via wikiproject banners on the talk page. RandomCanadian (talk / contribs) 03:43, 5 July 2020 (UTC)
  • Oppose I'm not joking in saying that this is normally how I find GAs to write. Stub tags are invaluable. TonyBallioni (talk) 02:17, 6 July 2020 (UTC)
  • Oppose I hate creating articles (for some reason), I don't think I've created an article yet. I do help massively improve existing articles, and have plans to edit a couple of stub articles and make them more full. Stubs have their benefits, help find topics that may look interesting, and give one a chance to expand them. Also, what Tony said above. ProcrastinatingReader (talk) 12:23, 6 July 2020 (UTC)

Proposal regarding a user script approval process[edit]

Regarding Wikipedia:User Scripts

This is a bit technical, so I'll try to keep jargon down as much as I can.

People have said that if MediaWiki were released today, user scripting wouldn't be integrated due to the security risks. But as of right now, user scripting is here to stay; it is a useful feature, but it still carries those security risks.

I'm the developer of a user script, and the fact that my script only faced mainstream scrutiny from administrators after it was used by over a hundred editors is concerning to me. Wikipedia is one of the biggest websites in the world; if it were to come under attack, and the attackers went for user scripts that are used by thousands of editors, the potential for disruption would be huge, not to mention the potential leaking of emails and IP addresses. Quite frankly, in this day and age of cyber-attacks, it is a question of not if, but when this will happen - so mitigating this risk is very important for the future of Wikipedia's security. If a user script grabbed a user's tokens or cookies and sent them to a remote server, an attacker could potentially control their account completely, like some sort of digital voodoo doll.

VPT works well for those with concerns, but some people might not think twice, especially with things like Enterprisey's script installer making installation one or two clicks, with a warning that many users may simply not bother to read. MfD is slow, and CSD does work, but both require a user with technical knowledge to read the script and spot what's wrong, or it's too late by the time the problem is finally noticed.
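To make concrete what a one- or two-click install actually produces: the installer adds a single line to the user's common.js page. The page name below is hypothetical; the point is that this line re-fetches whatever code currently lives at the target page on every page load, so trusting the install once means trusting every future revision of the script.

```javascript
// Hypothetical contents of Special:MyPage/common.js after an install.
// importScript() loads and runs the current revision of the target page
// on every page view, so the trust granted is ongoing, not one-time.
importScript( 'User:SomeUser/script.js' ); // page name is illustrative
```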

Other attacks may include, for example, a user creating a script under one premise so that it passes the approval process, then changing it maliciously; hence proposal 6 below, where the changes would also be reviewed. An approval process such as I'm proposing would also help scripts remain within Wikimedia's privacy policy and terms of service. Reviewers could also help script developers with issues in their code, making user script development more in line with Wikipedia's collaborative ethos.


So this is what I propose:

  • 1: The creation of a user right and/or a board of experienced and trusted script editors that can approve user scripts, either via consensus or at the discretion of a reviewer.
  • 2: The creation of an edit filter that would prevent, or warn, users importing another user's user script that has not been approved. The filter would be edited each time a script is approved, explicitly whitelisting it, although there is likely a better way to do this.
  • 2A: If possible, the filter will only apply to user scripts made following the creation of the approval process, as scripts by inactive users are still very much in use.
  • 3: The approval process would require open source code that would be reviewed for potentially malicious code, or other issues.
  • 4: Requiring all user script developers to enable or request two-factor authentication for security reasons, except in exceptional circumstances.
  • 5: The creation of a feed or mailing list to alert trusted script editors to changes within these trusted scripts, so if a malicious change is made, it can be reverted or deleted with minimal disruption. I am very willing to set something like this up as a bot or on Wikimedia Toolforge.
  • 6: The setting up of a clearer policy regarding user scripts, on what is and isn't allowed.


For a more secure "nuclear option":

As these proposals are more software based, there would need to be consensus at the Phabricator too.

  • N1: Potentially implementing pending-changes-like restrictions on all or most user scripts, where a trusted script editor would approve the changes, especially as they could affect hundreds of users.
  • N2: In the case of a script being edited by a trusted script editor themselves, for security reasons they wouldn't be able to accept their own pending changes; another script editor would need to approve them.

Of course, it is very important that this process not be overly restrictive, as Wikipedia is a community-led project, and preventing potentially game-changing scripts from getting through the process would be a letdown. The main purpose of this proposal is to mitigate the risk of the user script feature being used maliciously. Any established board would be created with the goal of preventing malicious scripts and ensuring the security, privacy and safety of users.

Thanks all for your consideration, and feel free to ask if you need any clarification regarding my proposal. Ed6767 talk! 02:15, 26 June 2020 (UTC)

TLDR: User scripts carry many security risks, and right now there's no approval or monitoring process; I'm proposing the creation of these. Ed6767 talk! 02:23, 26 June 2020 (UTC)
TLDR this is a hard problem. :) phab:T71445 --Izno (talk) 02:34, 26 June 2020 (UTC)
Izno, indeed it is. It's a shame that the discussion on Phabricator regarding this has stagnated, but I think opinions from non-technical users would also be important, hence my proposal here. Perhaps if this works out, it could lead the way for other WMF wikis? Ed6767 talk! 02:40, 26 June 2020 (UTC)
I don't see what the problem with the current system is: any user who installs a user script is implicitly trusting the author of that script not to do bad things. This is even mentioned in MediaWiki:Userjsdangerous ("If you import a script from another page with "importScript" or "iusc", take note that this causes you to dynamically load a remote script, which could be changed by others."). * Pppery * it has begun... 02:54, 26 June 2020 (UTC)
Pppery, but what if that user's account is compromised, or a malicious user fools people into trusting them? Warnings don't hold much weight now and often go ignored, and that's the issue. Ed6767 talk! 03:02, 26 June 2020 (UTC)
A simpler suggestion: MediaWiki namespace is where trusted scripts (gadgets and their dependencies) already live. And they can already only be edited by trusted WP:INTADMINs. A simpler solution could be to store non-gadget scripts in that namespace, requiring an IntAdmin to approve edit requests to create/update scripts. Userspace scripts would then just be for personal scripts and testing, and on your skin/common js pages a big warning could be displayed if you are importing scripts from userspace rather than the MediaWiki namespace (i.e. via a hidden on-by-default gadget). - Evad37 [talk] 03:15, 26 June 2020 (UTC)
I like the spirit of the idea, though I think the process could use some tweaking, such as Evad37's suggestion. User scripts pose threats that many people don't realize, and we should be more proactive in helping non-tech-savvy users avoid making themselves vulnerable. Wug·a·po·des 21:29, 27 June 2020 (UTC)
  • not to mention the potential leaking of emails and IP addresses - didn't you argue at WP:AN that it's not possible for your script to have access to any of this info? Here you're arguing that malicious scripts can compromise sensitive information?
I agree with the spirit of your suggestions, but I'm not sure much of this is necessary yet. In response to the suggestions, some of the points here are contrary to what you argued at WP:AN, and I found the statements you made there to be quite accurate. e.g. If a user script got a user's tokens or cookies and sent them to a remote server, then an attacker could potentially completely control their account like some sort of digital voodoo doll. -- HttpOnly cookies with a domain make this not possible directly, and I'm sure Wikipedia doesn't send cookies back in each request, and the userscripts don't run on login, so this can't be sidestepped via headers, hence I don't think there's any attack vector to compromise any of this info. Yes, usernames can be scraped off the page and sent to a foreign endpoint, which automatically gets your IP, but that's the closest to a threat in this regard I can think of, off the top of my head. As for the specific suggestions, I don't agree with a couple: (1) just have the BAG do it, rather than create yet another council. Script developers aren't approved by the community, whereas at least BAG is (mostly) run by 'crats and admins who were approved by the community in their RfAs, and again to join BAG. For (3), this is going to cause backlogs; yes, only diffs can be analysed, but it's still going to slow down updates, especially for large scripts or large updates. N1/N2 can't happen, because FlaggedRevs (Pending Changes) is practically abandoned. Personally, I see this being a bit of a faff to enact, for something that isn't even a problem yet. Have there been any serious incidents with user scripts to make this approach worthwhile? I think your script was one of the only ones that was completely hosted off-site; the rest have their diffs here, and it probably wouldn't take long for someone to realise one is scraping usernames and sending them off. ProcrastinatingReader (talk) 21:37, 27 June 2020 (UTC)
ProcrastinatingReader, I didn't say that "it is not possible for any script to access this information". My script didn't, and never will, have any code for obtaining this information; therefore, for my script, it was not and never will be possible. But my script does not equal all scripts, so these risks still stand. If a malicious script is loaded on the client side, malicious activity can occur in relation to that user's account and data. I like the idea of getting the BAG to do that, especially considering some intadmins are also in the BAG (relating to Evad37's idea), which may relieve backlogging. Ed6767 talk! 18:19, 28 June 2020 (UTC)
Ed6767, thanks for clarifying. When speaking technically, my interpretation of "is not possible" ignores the intent of the developer and considers only whether something is technically possible, so I'd automatically assumed you'd also used the term in this way. The main points in my previous response still stand, though: much of this sensitive information is not accessible to a script due to security features in "modern" browsers (i.e. practically every browser except old IE). With the workaround I mentioned, you could scrape IPs, and scripts aren't loaded on Special:Preferences, so email can't be scraped. I still don't think it's worth adding so many hoops and a technical process for something that has not yet been abused. Such a bureaucratic process would discourage script developers in the future, and we should desire the opposite, as there are many scripts that are helpful (and many areas which need scripts to fix pain points).
I agree with some less strong suggestions to improve security in this area, e.g. all scripts should be hosted on-site (but may use "popular" external assets, e.g. well-known scripts, from primary sources). That alone decreases the chance that a user can make malicious edits to a script without getting caught, as any changes will be permanently visible in a log. Perhaps a private edit filter could be used in future to flag potentially problematic lines, or foreign HTTP requests being made, which would quickly and automatically bring attention to them. I think that would be sufficient, honestly. I'd call it "a necessary evil" if prior abuse could be shown, but it can't, and I think the aforementioned two ideas solve most of the main problems without adding the bureaucracy. ProcrastinatingReader (talk) 18:33, 28 June 2020 (UTC)
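The edit-filter idea above could be sketched roughly as follows. Real filters are written in the AbuseFilter rule language rather than JavaScript, so this function is only an illustration of the kind of pattern matching involved; the network-call patterns and the domain whitelist are my own assumptions, and obfuscated code would evade such checks.

```javascript
// Illustrative sketch (not an actual AbuseFilter rule): flag user-script
// edits that add network calls or URLs pointing outside Wikimedia hosts.
// The pattern lists below are assumptions for demonstration purposes.
var NETWORK_CALL = /\b(fetch|XMLHttpRequest|navigator\.sendBeacon)\s*\(/;
var FOREIGN_URL = /https?:\/\/(?!(?:[a-z0-9-]+\.)*(?:wikipedia|wikimedia|wmflabs)\.org\b)/i;

// Returns true if the added text looks like it reaches a foreign endpoint
function flagsForReview( addedText ) {
    return NETWORK_CALL.test( addedText ) || FOREIGN_URL.test( addedText );
}
```

A real filter would run against the diff of the saved revision, and flagged edits would still need human review, which is why this complements rather than replaces a group of trusted script editors.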
ProcrastinatingReader, I think 2FA should be imposed, or at least encouraged, for user script developers. Edit filters would be pointless, as obfuscation would completely defeat them. And it is possible to obtain user info and emails without accessing Special:Preferences, see mw:API:Userinfo; here's an example, and it takes very little code:
let api = new mw.Api();
api.get( {
    action: 'query',
    meta: 'userinfo',
    uiprop: 'email',
    format: 'json'
} ).done( data => {
    console.log( data );
} );
Returns (for me):
{
    email: "ed6767wikixxxxxx",
    emailauthenticated: "2020-04-30T22:55:41Z",
    id: 00000,
    name: "Ed6767"
}
That's just the email. Modern browsers can't save you from everything. Even if CORS headers did stop you here, it wouldn't be too complex a matter to edit the info into some page somewhere, then get a scraper to jot it down. Just because something hasn't happened yet doesn't mean it won't. And on one of the most visited websites on the internet, it's not a matter of if, but when. It is a risk to non-tech-savvy users, and should be mitigated. Ed6767 talk! 18:54, 28 June 2020 (UTC)
Cookies are sent with each request, since that's the only way to maintain a session between requests (other than encoding the session information into the URL). Javascript programs running in the client can basically simulate any user interactions with the website (entering data and clicking, or using the Wikipedia APIs) and so are definitely risky. Evad37's suggestion is a good one in terms of being implementable immediately if desired, and still letting users create personal scripts without any new overhead. isaacl (talk) 19:00, 28 June 2020 (UTC)
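A minimal sketch of that last point, assuming only the standard mw.Api client that MediaWiki exposes to front-end code (the target page title is illustrative): a script never needs to read the session cookie, because the browser attaches it to every API request automatically, so any code running in the page can act as the logged-in user.

```javascript
// Any user script runs with the full rights of the logged-in user: the
// browser sends the session cookies with each API request on its own.
// mw.Api is the standard MediaWiki front-end API client.
function appendToSandbox( api ) {
    // postWithToken fetches a CSRF token and performs the write in one call
    return api.postWithToken( 'csrf', {
        action: 'edit',
        title: 'User:Example/sandbox', // illustrative page title
        appendtext: '\nAppended by a user script.'
    } );
}

// In a real user script this would simply be:
// appendToSandbox( new mw.Api() );
```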

This proposal strikes me as overkill: one person was unaware of the privacy implications of using a third-party CDN, and is now on a crusade to try to lock down all scripts everywhere using increasingly unlikely attack scenarios and appeals to the supposed incompetence of users. Let's not. The solution to "accidental use of external CDNs leaking information" is mw:Requests for comment/Content-Security-Policy. The better solution to "someone might write a malicious script" is to warn users to prefer Gadgets (which already have many of the safeguards being called for above) unless they know what they're doing, but not to go to the extreme of trying to require all scripts be gadgets. Anomie 13:28, 29 June 2020 (UTC)

Going on this, I'm not opposed to having "community scripts" that we store in MediaWiki space for things too niche to be gadgets. — xaosflux Talk 19:26, 29 June 2020 (UTC)

Official social media links in Wikipedia articles.[edit]

Okay, northern hemisphere snowing in Junely. --Izno (talk) 20:38, 27 June 2020 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Dear Wikipedia Team,

I would like to suggest that Wikipedia change its policy regarding links to OFFICIAL social media pages in Wikipedia articles (in the top-right infoboxes). Currently, those links are discouraged or relegated to the external links section of a Wikipedia article. However, in my experience, OFFICIAL social media pages on Facebook, Twitter, LinkedIn and YouTube are abundant, useful, and often unique sources of information regarding the subject of a Wikipedia article (a person, organization, enterprise or government). Thus, presenting those links in the top-right infoboxes would open a large window of knowledge and perspective for Wikipedia users.

Additional details for this proposal:

- I want to clarify that I am only referring to the links of MAIN / OFFICIAL social media pages. I am NOT referring to links of general social media content.

- Official social media pages almost always link to the "official website" of the subject. This makes them reliable and verifiable sources of information.

- Many times they are the ONLY source of updated information about the subject.

- Regarding access to social networks: It is true that access is limited. However, since many times OFFICIAL social media pages are the ONLY source of updated information about the subject, they should be privileged in infoboxes.

Best regards,

Elizabeth Anne Villegas Hernandez México. — Preceding unsigned comment added by ElizabethAnneVH (talkcontribs)

  • Strong oppose - these belong in the external links section. Wikipedia shouldn't be treated like an official platform or have promotional links to social media pages in infoboxes. In my opinion, doing so would violate WP:SOAP, WP:NOTLINK and WP:ELNO, all policies that work well at present (proposed changes to which should go to WP:VPP). Most people also come to Wikipedia from a search engine, like Google (who has info cards linking to social media accounts) and social media accounts also appear very close to Wikipedia in search results. Ed6767 talk! 18:54, 27 June 2020 (UTC)
  • Template:Infobox_person includes a website parameter for 'official' websites. It's not unusual for youtubers/instagrammers or group entities like bands to only have an official social media account. If they have no other conventional official website, I am unaware of any policy that would prevent you from adding it there. If they do have a proper website that lists their social media accounts, then that will always be the primary website. WP:ELOFFICIAL is usually the guide for social media links in external links. If a random actor has a social media account, it should not be included in the external links per WP:ELNO. A prominent youtuber would, however, generally be allowed a link to their youtube channel, as it satisfies ELOFFICIAL. You will have something approaching zero chance of getting the template Infobox_person to change its documentation to allow more than one website, or of gaining consensus that social media accounts (beyond the above) be listed without thought. Only in death does duty end (talk) 19:04, 27 June 2020 (UTC)
  • I imagine this proposal will gain zero traction, but Strong oppose anyway. People can find those things through a search engine. Social media is typically self-published trivia and junk information. Giving it any higher prominence than is described above (WP:ELOFFICIAL) only serves to dilute the neutral article tone, free of interference by the subject, that we strive for. ThatMontrealIP (talk) 19:10, 27 June 2020 (UTC)
  • Strong oppose - the last thing we need to do is privilege self-published sources. We allow the subject one link to their official website, if they have one; but social media pages are where the subjects do their publicity, spin-doctoring, official party-line versions of things, etc. They are neither reliable nor verifiable. Subjects of articles and their publicists lie all the time, especially about themselves. We are not Entertainment Tonight, we have no desire to be a channel for the latest publicity stills and gossip-fodder; nor the political equivalents thereof. ----Orange Mike | Talk 19:14, 27 June 2020 (UTC)
  • Strong oppose. Wikipedia is not here to promote your social media. -- RoySmith (talk) 20:21, 27 June 2020 (UTC)
  • Absolutely not - They belong in the external links section and no valid reason has been provided as to why we should change that. –Davey2010Talk 20:33, 27 June 2020 (UTC)

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Supreme Court of Wikipedia[edit]

(non-admin closure) Consider this a WP:SNOW close. It's very unlikely there is going to be a consensus to implement this, especially with such a poor proposal showing little change from the original, and the original subsection title of "Votes" already shows a misunderstanding of how this process works. Ed6767 talk! 23:07, 29 June 2020 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Idea[edit]

I'm re-proposing the WP:Supreme Court of Wikipedia for ArbCom and CU/OS decision appeals, and for adding/removing local statuses that can't be handled by bureaucrats: local stewards, chosen by admin vote from Wikipedians with 5 years/100,000 edits of experience and bureaucrats who sign a non-public-information agreement, excluding eligible Wikipedians. What do you think? — Preceding unsigned comment added by Another Wiki User the 2nd (talkcontribs)

Survey[edit]

No. The last thing we need is more bureaucracy. Go write an article or something and learn how Wikipedia actually works before suggesting multiple ideas which have been suggested (and declined) multiple times in the past.  Majavah talk · edits 18:39, 29 June 2020 (UTC)
Absolutely not. We don't need this. You're inventing a solution for something that isn't even a problem. For CU/OS issues there's also the Ombuds, whose job it is to oversee usage of the global CU/OS policy. There's no reason to create "local stewards" to add/remove CU/OS roles, stewards do the job just fine. And the idea of ArbCom appeals is pointless: you want to create a body of appointed individuals to hear appeals from an elected body? One area with drama is enough, we don't need to prolong the drama across two venues and for longer time. ProcrastinatingReader (talk) 19:27, 29 June 2020 (UTC)
No. As I have said elsewhere, you need to stop the proposals about changing the way Wikipedia works and edit some articles. Phil Bridger (talk) 20:55, 29 June 2020 (UTC)
Strong oppose. A solution to a non-problem; re-proposing a proposal from 14 years ago in this day and age without modernisation is almost certain to fail. Some ideas are just never going to succeed. Ed6767 talk! 23:00, 29 June 2020 (UTC)

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Major overhaul to requests for adminship guidelines[edit]

This isn't a proposal. Take it to the idea lab. (non-admin closure){{u|Sdkb}}talk 19:49, 1 July 2020 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

I think we need to change the way we nominate new administrators. For just the SECOND time this year, there is not a single request for adminship. 139.192.206.157 (talk) 12:50, 1 July 2020 (UTC)

There isn't really a specific proposal here, so this should be in the idea lab. ProcrastinatingReader (talk) 12:55, 1 July 2020 (UTC)
We seem to have about the same pace as last year: Wikipedia:2019 requests for adminship / Wikipedia:2020 requests for adminship. Gråbergs Gråa Sång (talk) 15:28, 1 July 2020 (UTC)

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Open for signatures - Community open letter on renaming[edit]

Dear all,

There is now an open letter that requests a pause to renaming activities being pursued by the Wikimedia Foundation 2030 Brand Project. Individual editors and affiliates (via their designated representative) can sign with their logged in account to show support.

The letter focuses on concerns about the current process, and not about specific naming choices. Affiliates and individuals that have signed have a diversity of views on specific names, but all are committed to a better, more community-driven process.

There is also currently a branding survey that runs until July 7. There is concern that the consultation process and options on the survey do not adequately reflect community sentiment, given the effect name changes for the foundation and movement would have. This served as a motivation for the open letter. Useful links are below:

  • Brand survey for individuals - Qualtrics survey. If there are options you would like to highlight outside of the three provided, it is possible to write in your own options and views at the end of the survey.
  • Brand survey for affiliates: A link should have been sent to the affiliate liaisons or affiliate contacts. If you have not received any correspondence, please contact Essie Zar (ezar -at- wikimedia.org) of WMF.

Additionally, it has been announced that there will be a WMF board meeting scheduled in July to discuss the branding issue, so it is important to express your views now.

Thanks - Fuzheado | Talk 04:03, 2 July 2020 (UTC)

Thanks for posting this here Fuzheado. After the initial mention of a July board meeting, I haven't seen it mentioned anywhere else; is this the only thing on the agenda? The list of Board meetings on Meta (and agendas) looks a bit out of date. – SJ + 05:16, 2 July 2020 (UTC)