Related Papers for Analytic Transparency, Provenance, and Explanation

Selected papers and abstracts specifically relevant to the analytic provenance project. See Eric Ragan's Google Scholar Profile for more publications from the Indie Lab.

Project Lead: Eric Ragan, PhD; Indie Lab at University of Florida


Selected Papers: Provenance

  • Block, J., Esmaeili, S., Ragan, E., Goodall, J., and Richardson, G. (2022). The Influence of Visual Provenance Representations on Strategies in a Collaborative Hand-off Data Analysis Scenario. IEEE Transactions on Visualization and Computer Graphics (TVCG). (pdf)

  • Park, D., Suhail, M., Zheng, M., Dunne, C., Ragan, E., and Elmqvist, N. (2021). StoryFacets: A Design Study on Storytelling with Visualizations for Collaborative Data Analysis. Information Visualization. pp. 1-12. August 2021. doi: 10.1177/14738716211032653 (link | pdf)

  • Chung, H., Esakia, A., and Ragan, E. (2020). The Impact of Utilizing a Large High-Resolution Display on the Analytical Process for Visual Histories. International Journal of Data Analytics (IJDA). 1(2), pp. 67-88. doi: 10.4018/IJDA.2020070106 (link | pdf)

  • Block, J. and Ragan, E. (2020). Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems. IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV 2020). Workshop at IEEE VIS 2020. (link | pdf)

  • Bolte, F., Nourani, M., Ragan, E., and Bruckner, S. (2020). SplitStreams: A Visual Metaphor for Evolving Hierarchies. IEEE Transactions on Visualization and Computer Graphics (TVCG). pp. 1-13. doi: 10.1109/TVCG.2020.2973564 (link | pdf)

  • Madanagopal, K., Ragan, E., and Benjamin, P. (2019). Analytic Provenance in Practice: The Role of Provenance in Real-World Visualization and Data Analysis Environments. IEEE Computer Graphics and Applications, 39(6), 30-45. doi: 10.1109/MCG.2019.2933419 (link | pdf)

  • Peña, A., Nirjhar, E. H., Pachuilo, A., Chaspari, T., and Ragan, E. (2019). Detecting Changes in User Behavior to Understand Interaction Provenance during Visual Data Analysis. User Interactions for Building Knowledge (UIBK) Workshop, ACM Intelligent User Interfaces (IUI) Workshops 2019. (link | pdf)

  • Mohseni, S., Peña, A., and Ragan, E. (2017). ProvThreads: Analytic Provenance Visualization and Segmentation. Extended poster abstract. In Proceedings of IEEE VIS 2017. (pdf)

  • Ragan, E., Endert, A., Sanyal, J., and Chen, J. (2016). Characterizing Provenance in Visualization and Data Analysis: An Organizational Framework of Provenance Types and Purposes. IEEE Transactions on Visualization and Computer Graphics (TVCG), 22(1), 31-40. doi: 10.1109/TVCG.2015.2467551 (link | pdf)

  • Pachuilo, A., Ragan, E., and Goodall, J. (2016). Leveraging Interaction History for Intelligent Configuration of Multiple Coordinated Views in Visualization Tools. Logging Interactive Visualizations and Visualizing Interaction Logs (LIVVIL) Workshop at IEEE VIS 2016. (link | pdf)

  • Linder, R., Peña, A., Jayarathna, S., and Ragan, E. (2016). Results and Challenges in Visualizing Analytic Provenance of Text Analysis Tasks Using Interaction Logs. Logging Interactive Visualizations and Visualizing Interaction Logs (LIVVIL) Workshop at IEEE VIS 2016. (link | pdf)

  • Ragan, E., Goodall, J., and Tung, A. (2015). Evaluating How Level of Detail of Visual History Affects Process Memory. In Proceedings of CHI Conference on Human Factors in Computing Systems (ACM CHI 2015). 2711-2720. ACM. doi: 10.1145/2702123.2702376 (link | pdf)

  • Ragan, E. and Goodall, J. (2014). Evaluation Methodology for Comparing Memory and Communication of Analytic Processes in Visual Analytics. Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV Workshop). Workshop at IEEE VIS 2014. ACM. doi: 10.1145/2669557.2669563. (link | pdf)

Selected Papers: Algorithmic Transparency and Explanation

  • Nourani, M., Roy, C., Honeycutt, D., Ragan, E., and Gogate, V. (2022). DETOXER: Visual Debugging Tool with Multi-Scope Explanations for Temporal Multi-Label Classification. IEEE Computer Graphics and Applications. pp. 1-11. doi: 10.1109/MCG.2022.3201465 (link | pdf)

  • Nourani, M., Roy, C., Block, J., Honeycutt, D., Rahman, T., Ragan, E., and Gogate, V. (2022). On the Importance of User Backgrounds and Impressions: Lessons Learned from Interactive AI Applications. ACM Transactions on Interactive Intelligent Systems (TiiS). doi: 10.1145/3531066 (link | pdf)

  • Roy, C., Nourani, M., Honeycutt, D., Block, J., Rahman, T., Ragan, E., Ruozzi, N., and Gogate, V. (2021). Explainable Activity Recognition in Videos: Lessons Learned. Applied AI Letters. doi: 10.1002/ail2.59 (link | pdf)

  • Linder, R., Mohseni, S., Yang, F., Pentyala, S., Ragan, E., and Hu, X. (2021). How Level of Explanation Detail Affects Human Performance in Interpretable Intelligent Systems: A Study on Explainable Fact Checking. Applied AI Letters. pp. 1-19. doi: 10.1002/ail2.49 (link | pdf)

  • Nourani, M., Roy, C., Block, J., Honeycutt, D., Rahman, T., Ragan, E., and Gogate, V. (2021). Anchoring Bias Affects Mental Models and User Reliance in Explainable AI Systems. ACM International Conference on Intelligent User Interfaces (ACM IUI). pp 1-11. doi: 10.1145/3397481.3450639. Award winner: Honorable Mention Best Paper. (link | pdf | video)

  • Mohseni, S., Block, J., and Ragan, E. (2021). Quantitative Evaluation of Machine Learning Explanations: A Human-Grounded Benchmark. ACM International Conference on Intelligent User Interfaces (ACM IUI). pp 1-10. doi: 10.1145/3397481.3450689 (pdf)

  • Mohseni, S., Yang, F., Pentyala, S., Du, M., Liu, Y., Lupfer, N., Hu, X., Ji, S., and Ragan, E. (2021). Machine Learning Explanations to Prevent Overtrust in Fake News Detection. To appear in International AAAI Conference on Web and Social Media (ICWSM). pp 1-10. (pdf)

  • Nourani, M., King, J., and Ragan, E. (2020). The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems. In AAAI Conference on Human Computation and Crowdsourcing (AAAI HCOMP). pp 1-10. (link | pdf)

  • Honeycutt, D., Nourani, M., and Ragan, E. (2020). Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy. In AAAI Conference on Human Computation and Crowdsourcing (AAAI HCOMP). pp 1-10. (link | pdf)

  • Mohseni, S., Zarei, N., and Ragan, E. (2020, accepted). A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems. To appear in ACM Transactions on Interactive Intelligent Systems. pp. 1-46. (pdf)

  • Mohseni, S., Yang, F., Pentyala, S., Du, M., Liu, Y., Lupfer, N., Hu, X., Ji, S., and Ragan, E. (2020). Trust Evolution Over Time in Explainable AI for Fake News Detection. In Workshop on Human-Centered Approaches to Fair and Responsible AI. Workshop at ACM CHI 2020. (link | pdf)

  • Nourani, M., Honeycutt, D., Block, J., Roy, C., Rahman, T., Ragan, E., and Gogate, V. (2020). Investigating the Importance of First Impressions and Explainable AI with Interactive Video Analysis. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (ACM CHI 2020). pp. 1-8. doi: 10.1145/3334480.3382967 (link | pdf)

  • Kum, HC and Ragan, E. (2019). Exploring the Use of Interactive Interfaces and Feedback Mechanisms to Enhance Privacy in Data Workers through Information Accountability. Workshop on Security Information Workers (WSIW 2019). (pdf)

  • Roy, C., Nourani, M., Shanbhag, M., Kabir, S., Rahman, T., Ragan, E., Ruozzi, N. and Gogate, V. (2019). Explainable Activity Recognition in Videos using Dynamic Cutset Networks. 3rd Workshop on Tractable Probabilistic Modeling (TPM 2019).

  • Roy, C., Shanbhag, M., Rahman, T., Gogate, V., Ruozzi, N., Nourani, M., Ragan, E., and Kabir, S. (2019). Explainable Activity Recognition in Videos. Workshop on Explainable Smart Systems (ExSS), ACM Intelligent User Interfaces (IUI) Workshops 2019. (link | pdf)

  • Yang, F., Pentyala, S. K., Mohseni, S., Du, M., Yuan, H., Linder, R., Ragan, E., Ji, S., and Hu, X. (2019). XFake: Explainable Fake News Detector with Visualizations. 2019 Web Conference (WWW). Demonstrations track. (pdf)

  • Kum, HC, Ragan, E., Ilangovan, G., Ramezani, Q., Li, Q., and Schmit, C. (2019). Enhancing Privacy through an Interactive On-demand Incremental Information Disclosure Interface: Applying Privacy-by-Design to Record Linkage. USENIX Symposium on Usable Privacy and Security (SOUPS 2019). (link | pdf)

  • Nourani, M., Kabir, S., Mohseni, S., and Ragan, E. (2019). The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems. AAAI Conference on Human Computation and Crowdsourcing (AAAI HCOMP), Vol. 7, No. 1, pp. 97-105. (link | pdf)

  • Ragan, E., Kum, HC, Ilangovan, G., and Wang, H. (2018). Balancing Privacy and Information Disclosure in Interactive Record Linkage with Visual Masking. In Proceedings of ACM CHI Conference on Human Factors in Computing Systems (ACM CHI 2018). doi: 10.1145/3173574.3173900. Award winner: Honorable Mention Award (top 5% of papers). (link | pdf)

  • Goodall, J., Ragan, E., Steed, C., Reed, J., Richardson, G., Huffer, K., Bridges, R., and Laska, J. (2018). Situ: Identifying and Explaining Suspicious Behavior in Networks. IEEE Transactions on Visualization and Computer Graphics (TVCG). 25(1), 204-214. doi: 10.1109/TVCG.2018.2865029 (link | pdf)