
Automatically Neutralizing Subjective Bias in Text

Comments

Nov 25 2019 Stanford NLP Group

Can #NLProc reduce bias in our news and politics? "Automatically Neutralizing Subjective Bias in Text" Pryzant, @jurafsky, … #AAAI2020. Parallel corpus of 180k biased and neutralized sentences. Models for editing subjective bias out of text. #Wikipedia #npov https://arxiv.org/abs/1911.09709 https://t.co/6U6CdlwqVz
1 reply, 158 likes
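For a sense of what the released parallel data looks like in practice, here is a minimal loading sketch. The file name wnc_pairs.tsv and its two-column (biased, neutralized) tab-separated layout are assumptions for illustration only; the actual WNC release ships in its own format.

```python
# Minimal sketch: load a parallel corpus of (biased, neutralized) sentence pairs.
# Assumes a hypothetical two-column TSV "wnc_pairs.tsv"; the real WNC files
# distributed with the paper use their own layout.
import csv

pairs = []
with open("wnc_pairs.tsv", encoding="utf-8") as f:
    for row in csv.reader(f, delimiter="\t"):
        biased, neutral = row
        pairs.append((biased, neutral))

print(f"{len(pairs)} sentence pairs loaded")
biased, neutral = pairs[0]
print("BIASED: ", biased)
print("NEUTRAL:", neutral)
```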


Nov 25 2019 Diyi Yang

Our recent #AAAI2020 work looks at neutralizing subjective #bias in text. It also releases a parallel corpus of 180K biased/unbiased sentence pairs mined from @Wikipedia #NLProc
2 replies, 139 likes


Nov 25 2019 Julian Harris

Automatically Neutralizing Subjective Bias in Text https://arxiv.org/abs/1911.09709 #NLProc by @stanfordnlp @KyotoU_News @GeorgiaTech @jurafsky @SadaoKurohashi Pryzant, & Martinez. First parallel corpus, "Wikipedia Neutrality Corpus" (WNC), of biased language: 180,000 sentence pairs https://t.co/ufepY30JxQ
1 reply, 47 likes


Mar 06 2020 HotComputerScience

Most popular computer science paper of the day: "Automatically Neutralizing Subjective Bias in Text" https://hotcomputerscience.com/paper/automatically-neutralizing-subjective-bias-in-text https://twitter.com/stanfordnlp/status/1199017784115519488
1 reply, 38 likes


Feb 11 2020 Richard Diehl

. @Diyi_Yang @jurafsky @RPryzant Stanford NLP lab shows how we can use deep language models to automatically detect and correct bias in media at #AAAI20. Paper: https://arxiv.org/abs/1911.09709 Dataset: http://bit.ly/bias-corpus https://t.co/IMp25NlWnY
0 replies, 31 likes


Nov 25 2019 WikiResearch

"Automatically Neutralizing Subjective Bias in Text" A method to automatically bring inappropriately subjective text into a neutral point of view, and a corpus of 180k biased/unbiased sentence pairs from Wikipedia. (Pryzant et al, 2019) https://arxiv.org/pdf/1911.09709.pdf https://t.co/Fuwo8uCwe2
0 replies, 14 likes


Feb 28 2020 WikiResearch

"Towards Detection of Subjective Bias using Contextualized Word Embeddings" detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus (WNC). (Dadu et al, 2020) paper: https://arxiv.org/pdf/2002.06644.pdf WNC: https://arxiv.org/abs/1911.09709` https://t.co/hBQhdfZ6Pd
0 replies, 10 likes
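The detection work above fine-tunes BERT-style models on WNC to flag subjective sentences. A rough sketch of that kind of setup with Hugging Face transformers, assuming a generic bert-base-uncased checkpoint and a two-label head; this is not the authors' exact model or hyperparameters.

```python
# Sketch of a BERT-based subjective-bias classifier in the spirit of the
# detection work above. Illustrative only; not the paper's configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = neutral, 1 = subjectively biased
)

sentences = [
    "The senator gave a speech on Tuesday.",
    "The senator gave a shockingly dishonest speech on Tuesday.",
]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits
print(logits.argmax(dim=-1).tolist())  # meaningless until the head is fine-tuned
```

In practice the classification head would be fine-tuned on WNC-derived examples (biased originals as positives, neutralized revisions as negatives) before the predictions carry any signal.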


Feb 28 2020 Kartikey Pant

Proud to share that our work on detection of subjective bias in Wikipedia using BERT-based models has been featured by @WikiResearch. This work has also been accepted as a poster at @TheWebConf 2020 to be held in Taipei, Taiwan. @DaduTanvi #NLProc #Wikipedia
0 replies, 6 likes


Nov 25 2019 Scholarcy: Read less. Learn more.

BERT+LSTM to automatically identify and neutralize subjective bias in text: https://arxiv.org/pdf/1911.09709. Data and code: https://github.com/rpryzant/neutralizing-bias. And here's our own #AI generated plain language summary. #NLProc https://t.co/4fSeDJyzcC
0 replies, 4 likes
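One plausible reading of "BERT+LSTM" is a BERT encoder whose token representations feed a small LSTM that scores each word for subjective bias. The sketch below follows that assumption only; the class name BiasTagger and all hyperparameters are illustrative, and the linked repository contains the actual released models.

```python
# Rough sketch: BERT token encoder + bidirectional LSTM tagger that scores each
# token as biased vs. not. Illustrative; see github.com/rpryzant/neutralizing-bias
# for the real implementation.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class BiasTagger(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", hidden=256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, 2)  # per-token: biased / not biased

    def forward(self, input_ids, attention_mask):
        states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(states)
        return self.classifier(lstm_out)  # (batch, seq_len, 2) logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BiasTagger()
batch = tokenizer(["John is a great and visionary leader."], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # (1, seq_len, 2) -- untrained, shown only for shapes
```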


Nov 25 2019 arxiv

Automatically Neutralizing Subjective Bias in Text. http://arxiv.org/abs/1911.09709 https://t.co/CAaFXBVaPr
0 replies, 3 likes


Feb 10 2020 IC@GT ✈️ AAAI 2020 in NYC

IC's @Diyi_Yang was a co-author on this work at #AAAI2020. The paper is "Automatically Neutralizing Subjective Bias in Text," & we recently ran a podcast w/ Diyi & IC Chair @robotsmarts on the subject. Check it out: Podcast: https://apple.co/2Sx61Wn Paper: https://arxiv.org/abs/1911.09709 https://t.co/Zmz67FCWHO
1 reply, 3 likes


Dec 08 2019 arXiv CS-CL

Automatically Neutralizing Subjective Bias in Text http://arxiv.org/abs/1911.09709
0 replies, 2 likes


Feb 10 2020 Ingrid Mason

Throwing this into #digitalhumanities #AI4LAM. Uncomfortable with the language of purported “neutrality” and “debias” as a kind of text washing using NLP. I do see educational value for this, but no text is neutral, and intent can be very well disguised without inflammatory language.
2 replies, 2 likes


Feb 11 2020 Mike Jones

This (via @1n9r1d) is so problematic I don't know where to start. But I'll start here: leaving aside the idea that bias in text can be 'neutralized' (automatically or manually), even the assumption that bias is *visible* in the words of the text is flawed.
2 replies, 1 like

