
Read, Attend and Comment: A Deep Architecture for Automatic News Comment Generation

Comments

Sep 30 2019 Arvind Narayanan

A paper by Beijing researchers presents a new machine learning technique whose main uses seem to be trolling and disinformation. It's been accepted for publication at EMNLP, one of the top 3 venues for Natural Language Processing research. Cool Cool Cool https://arxiv.org/pdf/1909.11974v1.pdf https://t.co/Y8U5AjENrh
21 replies, 742 likes


Sep 28 2019 Sam Bowman

Eegh. This might be the most net-evil research topic I’ve seen for an #nlp paper.
6 replies, 192 likes


Oct 01 2019 toomas hendrik ilves

Just great. You no longer need to hire the IRA's troll army in St. Petersburg. You can just use machine learning to crank out trolling commentary. h/t to coder-whiz mathematician @danbogdanov. Attn: @conspirator0 @benimmo @selectedwisdom @katestarbird
7 replies, 170 likes


Sep 28 2019 Matthew Kenney

Just read this on the arXiv Read, Attend and Comment: A Deep Architecture for Automatic News Comment Generation http://arxiv.org/abs/1909.11974v1
8 replies, 105 likes


Oct 01 2019 Bill Bishop

One of the authors works for Microsoft in Beijing
3 replies, 80 likes


Sep 30 2019 Sam Gregory

Reminder that automated text generation is also a big threat to the public sphere, alongside the more obviously sensational threat of #deepfakes (paging @jeremyphoward), and a reminder of the need for more focused/consistent ethics of publication and discussion of *un*intended uses in this area #ML
2 replies, 67 likes


Oct 03 2019 atomicthumbs

beginning to think that a sizable portion of "machine learning researchers" are shit heads
5 replies, 35 likes


Oct 02 2019 /Fay-lee-nuh/

Not everything that could be built should be built, part 452936
0 replies, 29 likes


Sep 29 2019 alex hayes

please don't build things like this
3 replies, 28 likes


Oct 01 2019 Elsa B. Kania

Well, this is interesting and quite concerning. I’d note that the authors are from Beihang University, which is closely linked to military research, and Microsoft...
2 replies, 25 likes


Oct 02 2019 hardmaru

Thread. https://twitter.com/random_walker/status/1178663474475483137
3 replies, 24 likes


Nov 07 2019 Matthias Gallé

Just attended this talk at #emnlp2019. Pushed on the issues raised in this thread, the author said "there are no ethical issues in this work"
1 replies, 18 likes


Oct 02 2019 Nick Diakopoulos

"A Deep Architecture for Automatic News Comment Generation" -- I wonder if any news commenting moderation systems (@coralproject perhaps?) are considering how to screen for this. It's coming ... it's already here.
2 replies, 16 likes


Oct 06 2019 Hannes Grassegger

This tool could blow up Social Media. China's about to automate online commenting, using AI. Think about it for a second https://arxiv.org/pdf/1909.11974v1.pdf?utm_source=Disinformation+Matters&utm_campaign=9dc0f1df65-EMAIL_CAMPAIGN_2019_10_06_05_33&utm_medium=email&utm_term=0_610841f0e2-9dc0f1df65-223635137 https://t.co/wPRj1uXXRk
1 replies, 11 likes


Oct 02 2019 Brandon Roberts

Shocking breaking news: ML researchers still completely ignoring the ethical implications of their work. /s
1 replies, 11 likes


Sep 30 2019 emmi

Uncritical works on this are bad. Research exposing these near-term inevitabilities as attack vectors is good.
1 replies, 9 likes


Oct 02 2019 Max Heinemeyer

https://arxiv.org/pdf/1909.11974v1.pdf Using AI to create fake comments for news articles. What could possibly go wrong?
1 replies, 9 likes


Oct 01 2019 Marietje Schaake

Thread on how AI can be used for comment-generation. Deep-fakes through language ↘️
1 replies, 8 likes


Oct 01 2019 Peter W. Singer

Great example of how future of #likewar is well beyond deepfakes...
0 replies, 7 likes


Oct 01 2019 Jan Jekielek

Whoa! An auto-trolling comment algorithm. How convenient. “Unfortunately, EMNLP does not seem to require authors to address or acknowledge the ethical implications of their work.”
0 replies, 6 likes


Sep 30 2019 Daniel Leufer

Something like this only has terrible applications for society. There’s no sense in which this is ‘just research’. Any responsible journal or conference wouldn’t allow it in
0 replies, 5 likes


Sep 30 2019 Alvin Grissom II

I won't be at EMNLP this year, but I hope that those in attendance will ask the obvious questions.
0 replies, 5 likes


Nov 07 2019 Kevin Gallagher

In this alarming paper, Chinese researchers present "A Deep Architecture for Automatic News Comment Generation" https://arxiv.org/pdf/1909.11974v1.pdf https://t.co/5cWiN8C9Ms
0 replies, 4 likes


Sep 30 2019 Harry Brignull

thisisfine.jpg
0 replies, 3 likes


Oct 02 2019 Maulik Kamdar

Another one of those "just because you can, does not mean you should" stories.
0 replies, 3 likes


Sep 30 2019 Michael Caster

Amid reports of #Beijing-backed social media disinformation campaigns in #HongKong and elsewhere, Beijing researchers present new machine learning algorithm that promises to effectively "outperform existing methods" for trolling and disinformation. #China #ArtificialIntelligence
0 replies, 3 likes


Oct 06 2019 Raju Narisetti

What are the ethical responsibilities of a publication in spotlighting research that could end up being used for not-so-good social outcomes? A thread
0 replies, 2 likes


Nov 07 2019 Nirant

Absolutely love how everyone is skirting around the observation that in this automated fake news generation paper: 1. All the authors are Chinese 2. Microsoft China sponsored this paper. We see what you did there, academic twitter. PS: Will delete this if I feel threatened I ❤️ China
1 replies, 2 likes


Oct 01 2019 Dinesh Nair

Research by Beijing-based researchers, two from Microsoft, on a machine learning technique to generate comments on news articles. Bots replying to articles. Example of AI taking over cybertrooper jobs. Full paper to be presented at EMNLP in November - https://arxiv.org/pdf/1909.11974v1.pdf https://t.co/7H7eUNyOV8
0 replies, 2 likes


Oct 01 2019 Internet Ethics

Thread #ethics #MachineLearning #research #tech
0 replies, 2 likes


Oct 02 2019 Gen-X Cynic

What could possibly go wrong?
0 replies, 2 likes


Oct 02 2019 Dr. Courtney Radsch

Well, this is concerning. As a scholar and a press freedom advocate, this concerns me. We need to include ethical considerations in research, especially when there's no IRB process
0 replies, 1 likes


Oct 01 2019 Michael Lucy

new and more realistic comment bots coming soon
1 replies, 1 likes


Oct 02 2019 Ryan Butner

One of those papers that, in light of modern info warfare tactics, really makes you question who else, besides governments looking to poison political dialogue online, could remotely benefit from this.
0 replies, 1 likes


Oct 02 2019 Niels Wouters 🤖

This is exactly why all computer science / security / technology / interaction conferences should require authors to critically reflect upon and describe the (unintended) ethical impact of their work...
0 replies, 1 likes


Oct 03 2019 Ethan Fecht

Here's something weird: Chinese researchers publishing their findings on how to accomplish "automatic news comment generation" because it's "beneficial for real applications" https://arxiv.org/pdf/1909.11974v1.pdf
0 replies, 1 likes


Sep 30 2019 Cristina Alaimo

Okay, this is not cool anymore. #academia should really act as a community and discourage or even sanction research on tools that can be directly used for ambiguous practices ⚠️
0 replies, 1 likes


Oct 01 2019 Alex John London

Interesting thread—Microsoft sponsored conference includes a paper that generates comments on news stories. If the main uses are trolling and disinformation then what is it doing at this meeting? #aiEthics? #ethicalAI
0 replies, 1 likes


Oct 04 2019 Frédéric Marty

"Read, Attend and Comment: A Deep Architecture for Automatic News Comment Generation" https://arxiv.org/pdf/1909.11974v1.pdf
0 replies, 1 likes


Oct 01 2019 Rozncrantz

OMG. Did I just hear the sound of a thousand Wumao hearts breaking...? #antiELAB #HongKongProtests
0 replies, 1 likes


Sep 29 2019 Joshua Raclaw

Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.
1 replies, 0 likes


Sep 30 2019 Michael Piotrowski

This is a particularly egregious case, but unfortunately it seems that with the recent explosive growth of interest in #nlproc (just look at the submission numbers of the top conferences!) this attitude is now widespread in #nlproc.
1 replies, 0 likes

