Yuhao Zhang: 👋 Excited to share our latest work "Contrastive Learning of Medical Visual Representations from Paired Images and Text".
We propose a contrastive learning framework that learns visual representations of medical images from their naturally paired text reports.
👇 (1/7) https://t.co/tuyA12iwoY
10 replies, 347 likes
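For readers who want the objective in one place: a paraphrase of the paper's bidirectional contrastive loss (notation adapted here, not copied verbatim). Each projected image embedding v_i is pulled toward its paired report embedding u_i, and vice versa, against the other pairs in the same batch; <.,.> denotes cosine similarity, tau a temperature, and lambda a weighting in [0, 1].

```latex
\ell_i^{(v \to u)} = -\log
  \frac{\exp\!\left(\langle v_i, u_i \rangle / \tau\right)}
       {\sum_{k=1}^{N} \exp\!\left(\langle v_i, u_k \rangle / \tau\right)},
\qquad
\ell_i^{(u \to v)} = -\log
  \frac{\exp\!\left(\langle u_i, v_i \rangle / \tau\right)}
       {\sum_{k=1}^{N} \exp\!\left(\langle u_i, v_k \rangle / \tau\right)},
\qquad
\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N}
  \left( \lambda \, \ell_i^{(v \to u)} + (1 - \lambda) \, \ell_i^{(u \to v)} \right)
```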
Curt Langlotz: Hot off the press: We have developed a self-supervised learning method that pre-trains much better than ImageNet and reduces labeling needs by an order of magnitude for medical imaging applications:
4 replies, 183 likes
Peng Qi: Can you teach ConvNets to better find anomalies in medical images via "reading" the radiology report they are associated with?
Yuhao Zhang's (@yuhaozhangx) new work shows how unsupervised learning on data that's already routinely collected in hospitals is surprisingly effective!
0 replies, 39 likes
Stanford NLP Group: Current medical image understanding suffers from the weakness of vision-only pretraining and the small size of expert-labeled datasets. Our ConVIRT method instead exploits the text reports doctors already produce. By @yuhaozhangx @hjian42 Yasuhide Miura @chrmanning @curtlanglotz https://arxiv.org/abs/2010.00747 https://t.co/WpIE5DGJJl
0 replies, 32 likes
Yuhao Zhang: 🚀 Joint work with great collaborators @hjian42, Yasuhide Miura, @chrmanning & @curtlanglotz. Work at @stanfordnlp & @StanfordAIMI.
ConVIRT = Contrastive VIsual Representation learning from Text
📖 Details in paper: https://arxiv.org/abs/2010.00747
Code is coming! (7/7)
1 reply, 5 likes
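While the official code is not out yet, here is a minimal PyTorch sketch of the bidirectional loss above. It assumes the image and text encoders have already projected both modalities into a shared embedding space; the temperature `tau`, weighting `lam`, and the random stand-in features are placeholders, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def convirt_style_loss(v, u, tau=0.1, lam=0.75):
    """Bidirectional in-batch contrastive loss.

    v: (N, d) projected image embeddings, u: (N, d) projected text embeddings.
    Each image is trained to match its paired report (and vice versa)
    against the other N-1 pairs in the batch.
    """
    v = F.normalize(v, dim=-1)
    u = F.normalize(u, dim=-1)
    logits = v @ u.t() / tau                        # (N, N) cosine similarities / temperature
    targets = torch.arange(v.size(0), device=v.device)
    loss_v2u = F.cross_entropy(logits, targets)     # image -> text direction
    loss_u2v = F.cross_entropy(logits.t(), targets) # text -> image direction
    return lam * loss_v2u + (1 - lam) * loss_u2v

if __name__ == "__main__":
    # Random features standing in for encoder + projection-head outputs.
    v = torch.randn(32, 512)
    u = torch.randn(32, 512)
    print(convirt_style_loss(v, u).item())
```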
This work applies contrastive learning to paired medical images and text reports, data that are already routinely collected in clinical practice, to learn visual representations. The resulting models are more useful than ImageNet-pretrained ones and greatly improve image retrieval.
0 replies, 2 likes
ワクワクさん: EBM is all you need!
0 replies, 1 likes
Hang Jiang: Take a look at our new work @stanfordnlp!
0 replies, 1 likes
Found on Oct 05 2020 at https://arxiv.org/pdf/2010.00747.pdf