Large Transformer-based language models can route and reshape complex information via their multi-headed attention mechanism. Although the attention receives no explicit supervision, it can exhibit interpretable patterns that follow linguistic or positional cues. To further our understanding of the inner workings of these models, we need to analyze both the learned representations and the attentions.
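As a concrete illustration of the unsupervised attention weights discussed above, the minimal sketch below extracts them from a pretrained model with the 🤗 Transformers library. The model name and example sentence are arbitrary illustrative choices, not part of the demo.

```python
# Minimal sketch: inspecting attention weights with 🤗 Transformers.
# The model and sentence are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions holds one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len). These weights were never
# explicitly supervised, yet recognizable patterns often emerge.
layer0 = outputs.attentions[0]
print(layer0.shape)  # e.g. torch.Size([1, 12, 9, 9])
```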
To support analysis of a wide variety of 🤗 Transformers models, we introduce exBERT, a tool that helps humans conduct flexible, interactive investigations and formulate hypotheses about a model's internal reasoning process. exBERT provides insights into the meaning of the contextual representations and attention by matching a human-specified input to similar contexts in large annotated datasets.
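The matching idea can be approximated in a few lines: embed a reference corpus with a model, then rank its tokens by cosine similarity to the contextual representation of a query token. The sketch below is a simplified stand-in assuming bert-base-uncased and a toy two-sentence corpus; exBERT's actual pipeline and annotated datasets differ.

```python
# Rough sketch of the matching idea: compare one token's hidden state
# against a small pre-embedded reference corpus by cosine similarity.
# Illustrative only; not exBERT's actual implementation.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def token_embeddings(text):
    """Return the tokens of `text` and their last-layer hidden states."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return tokens, hidden

# Tiny stand-in "annotated corpus" (illustrative sentences).
corpus_tokens, corpus_vecs = [], []
for sent in ["The bank approved the loan.", "She sat by the river bank."]:
    toks, vecs = token_embeddings(sent)
    corpus_tokens += [(sent, t) for t in toks]
    corpus_vecs.append(vecs)
corpus_vecs = torch.cat(corpus_vecs)

# Query: the contextual representation of "bank" in a new sentence.
q_toks, q_vecs = token_embeddings("He deposited cash at the bank.")
query = q_vecs[q_toks.index("bank")]

# Rank corpus tokens by similarity to the query representation.
sims = F.cosine_similarity(query.unsqueeze(0), corpus_vecs)
for idx in sims.topk(3).indices:
    print(corpus_tokens[idx], float(sims[idx]))
```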
The fully featured demo includes select Transformer models together with two reference corpora, the Wizard of Oz and a subset of Wikipedia, pre-annotated with each model's hidden representations. Please let us know what you think by commenting below!
@inproceedings{hoover-etal-2020-exbert,
    title = "ex{BERT}: A Visual Analysis Tool to Explore Learned Representations in {T}ransformer Models",
    author = "Hoover, Benjamin and Strobelt, Hendrik and Gehrmann, Sebastian",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-demos.22",
    pages = "187--196",
}