
Create an "acknowledgement of harmful content" type page #2595

Open · sarayourfriend opened this issue Jul 7, 2023 · 2 comments
Labels

- 🕹 aspect: interface (Concerns end-users' experience with the software)
- design: needed (Needs a designer's touch before implementation can begin)
- 🌟 goal: addition (Addition of new feature)
- 🟨 priority: medium (Not blocking but should be addressed soon)
- 🧱 stack: frontend (Related to the Nuxt frontend)

Comments

@sarayourfriend (Contributor)

Problem

This is somewhat related to #2550 and could potentially be merged into it. Also related to #2593.

Openverse aggregates works from sources that include catalogue materials with harmful (racist, violent, colonial, etc.) content. For example, this result from Boston Public Library includes a violent racial slur both in the title and in the text within the image (CW: racial slurs, racism). This work is sourced from BPL but hosted on Flickr. The Flickr page does not include any information about the historical context of the work either, so this example would not be covered by #2593, though the image would be marked sensitive based on the sensitive text detection work.

However, none of that would necessarily make it clear why Openverse displays such content.

Description

In addition to #2550, either as a section of that page or as a separate page altogether, make explicit the fact that Openverse aggregates works from sources whose collections include harmful content. I don't think we need to use the term "sensitive" in this particular case, because we're discussing specifically works that have been used to harm specific communities. This differs from "sensitive" because it is universally true that racial slurs have been and are used to harm people, whereas "sensitivity" is more about an individual's tolerance of exposure to such material and also covers broader categories, like nudity, which aren't necessarily harmful. The distinction is important.

The link to this page should be available in the pages menu and either:

  1. On every single result page
  2. On single result pages that match some subset of our "sensitive terms list"
  3. On every single result page for particular providers

My preference is the first option, with the third as a runner-up if we need to exclude the link for some providers for some reason. I don't think the second is viable, because it would require us to decide which of the terms in our sensitive terms list are "harmful" versus merely "sensitive".
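
A minimal sketch of that display rule, as a hypothetical TypeScript helper for the Nuxt frontend (the helper name, the provider slug parameter, and the exclusion set are all assumptions for illustration, not existing Openverse code):

```ts
// Hypothetical helper: decides whether a single result page shows the
// link to the harmful-content acknowledgement page.
// An empty exclusion set implements option 1 (show the link everywhere);
// populating it implements the runner-up, option 3.
const EXCLUDED_PROVIDERS: ReadonlySet<string> = new Set<string>([])

export function showsHarmfulContentLink(providerSlug: string): boolean {
  return !EXCLUDED_PROVIDERS.has(providerSlug)
}
```

Option 2 is deliberately absent from the sketch, since deciding which sensitive terms count as "harmful" is exactly the judgement call described above as non-viable.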

This would need to be available alongside provider-specific links to similar acknowledgements/explanations as described in #2593.
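
One way the provider-specific links could sit alongside the global one, sketched under assumptions (the map, its keys, and the URLs are placeholders, not real provider data):

```ts
// Hypothetical map from provider slug to that provider's own
// acknowledgement/explanation page, as described in #2593.
// Entries are placeholders; real URLs would come from provider metadata.
const providerAcknowledgements: Readonly<Record<string, string>> = {
  // "bostonpubliclibrary": "<provider acknowledgement URL>",
}

// Returns the provider-specific link if one exists, so it can be rendered
// next to the global acknowledgement link on a single result page.
export function providerAcknowledgementLink(
  providerSlug: string,
): string | undefined {
  return providerAcknowledgements[providerSlug]
}
```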

@sarayourfriend sarayourfriend added 🚦 status: awaiting triage Has not been triaged & therefore, not ready for work 🌟 goal: addition Addition of new feature 🕹 aspect: interface Concerns end-users' experience with the software 🧱 stack: frontend Related to the Nuxt frontend labels Jul 7, 2023
@obulat obulat added 🟨 priority: medium Not blocking but should be addressed soon and removed 🚦 status: awaiting triage Has not been triaged & therefore, not ready for work labels Jul 14, 2023
@dhruvkb dhruvkb added the design: needed Needs a designer's touch before implementation can begin label Aug 10, 2023
@AetherUnbound (Contributor)

@sarayourfriend do you feel this issue was addressed in the content from #3074 and can be closed?

@sarayourfriend (Contributor, Author)

No, the sensitive content explanation page does not cover the full extent of what this issue is about. The goal behind this issue and the page/feature it proposes is to directly acknowledge that there is harmful content, which goes beyond "sensitive" content. Harmful content includes things like historical, racialised, violent descriptions and depictions of people present in GLAM (gallery, library, archive, and museum) institution catalogues, which we include in Openverse (it's relatively easy to find these if you search for common terms and include sensitive results).

The idea here is that we should explain why these results exist at all, going beyond the "non-censorship" perspective in the sensitive content page, which explains why we don't necessarily remove things (while reserving the right to remove things at the project's discretion). But certain things we wouldn't remove, like items that are undoubtedly violent and harmful but form part of the historical record contained in GLAM institution catalogues (at least I don't think we would or should). That's a separate issue from sensitive content, from my perspective.

It's also critical, as I mentioned in the issue description, to include these on results where the GLAM institutions themselves have acknowledged the harmful content. From a reputational perspective, for both Openverse and the provider, it's important to include these acknowledgements. Otherwise, Openverse's presentation of those results would not be in line with the way those providers themselves present the work, leaving out significant interpretive aids and the context necessary to understand the work.
