TY - GEN
T1 - Multimodal Knowledge Graph for Deep Learning Papers and Code
AU - Kannan, Amar Viswanathan
AU - Fradkin, Dmitriy
AU - Akrotirianakis, Ioannis
AU - Kulahcioglu, Tugba
AU - Canedo, Arquimedes
AU - Roy, Aditi
AU - Yu, Shih-Yuan
AU - Malawade, Arnav
AU - Al Faruque, Mohammad Abdullah
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/10/19
Y1 - 2020/10/19
AB - Keeping up with the rapid growth of Deep Learning (DL) research is a daunting task. While existing scientific literature search systems provide text search capabilities and can identify similar papers, gaining an in-depth understanding of a new approach or an application is much more complicated. Many publications leverage multiple modalities to convey their findings and spread their ideas - they include pseudocode, tables, images and diagrams in addition to text, and often make their implementations publicly accessible. It is important to be able to represent and query these modalities as well. We utilize RDF knowledge graphs (KGs) to represent multimodal information and enable expressive querying over modalities. In our demo, we present an approach for extracting KGs from different modalities, namely text, architecture images, and source code. We show how graph queries can be used to gain insights into the different facets (modalities) of a paper and its associated code implementation. Our innovation lies in the multimodal nature of the KG we create. While our work is of direct interest to DL researchers and practitioners, our approaches can also be leveraged in other scientific domains.
KW - deep learning
KW - knowledge graphs
KW - multimodal information retrieval
KW - scientific knowledge graph exploration
KW - scientific knowledge graphs
UR - https://www.scopus.com/pages/publications/85095865930
UR - https://www.scopus.com/inward/citedby.url?scp=85095865930&partnerID=8YFLogxK
U2 - 10.1145/3340531.3417439
DO - 10.1145/3340531.3417439
M3 - Conference contribution
AN - SCOPUS:85095865930
T3 - International Conference on Information and Knowledge Management, Proceedings
SP - 3417
EP - 3420
BT - CIKM 2020 - Proceedings of the 29th ACM International Conference on Information and Knowledge Management
PB - Association for Computing Machinery
T2 - 29th ACM International Conference on Information and Knowledge Management, CIKM 2020
Y2 - 19 October 2020 through 23 October 2020
ER -