Explainable Artificial Intelligence (XAI)
David Gunning, DARPA I2O
Program Update, November 2017 (FY17–FY21)
Approved for public release: distribution unlimited.

The Need for Explainable AI
• We are entering a new age of AI applications across transportation, finance, security, medicine, legal, and military domains.
• Machine learning is the core technology.
• Machine learning models are opaque, non-intuitive, and difficult for people to understand.
Users ask of an AI system:
• Why did you do that?
• Why not something else?
• When do you succeed?
• When do you fail?
• When can I trust you?
• How do I correct an error?
The current generation of AI systems offers tremendous benefits, but their effectiveness will be limited by the machine's inability to explain its decisions and actions to users. Explainable AI will be essential if users are to understand, appropriately trust, and effectively manage this incoming generation of artificially intelligent partners.

XAI in the News
• "The Dark Secret at the Heart of AI," Will Knight, April 11, 2017
• "Oracle quietly researching 'Explainable AI'," George Nott, May 5, 2017
• "Charles River Analytics-Led Team Gets DARPA Contract to Support Artificial Intelligence Program," Ramona Adams, June 13, 2017
• "Team investigates artificial intelligence, machine learning in DARPA project," Lisa Daigle, June 14, 2017
• "Why The Military And Corporate America Want To Make AI Explain Itself," Steven Melendez, June 22, 2017
• "How AI detectives are cracking open the black box of deep learning," Paul Voosen, July 6, 2017
• "Intelligent Machines Are Asked to Explain How Their Minds Work," Richard Waters, July 11, 2017
• "Elon Musk and Mark Zuckerberg Are Arguing About AI -- But They're Both Missing the Point," Artur Kiulian, July 28, 2017
• "Demystifying the Black Box That Is AI," Ariel Bleicher, August 9, 2017
• "Inside DARPA's Push to Make Artificial Intelligence Explain Itself," Sara Castellanos and Steven Norton, August 10, 2017
• "You better explain yourself, mister: DARPA's mission to make an accountable AI," Dan Robinson, September 29, 2017
• "Ghosts in the Machine," Christina Couch, October 25, 2017
• "DARPA's XAI seeks explanations from autonomous systems," Geoff Fein, November 16, 2017
• "Can A.I. Be Taught to Explain Itself?," Cliff Kuang, November 21, 2017

Deep Learning Neural Networks: Architecture and How They Work
• Training data feeds a deep neural network that maps an unlabeled input image to a label.
• Features progress from low level to high level across the layers: 1st-layer neurons respond to simple shapes, 2nd-layer neurons respond to more complex structures, and nth-layer neurons respond to highly complex, abstract concepts.
• Feature extraction and classification are learned automatically by the algorithm.

What Are We Trying To Do?
Today
• Training data → learning process → learned function → output: "This is a cat" (p = 0.93).
• The user is left asking the six questions above: why did you do that, why not something else, when do you succeed or fail, when can I trust you, how do I correct an error.
Tomorrow
• Training data → new learning process → explainable model + explanation interface → user with a task.
• The system can explain itself: "This is a cat. It has fur, whiskers, and claws; it has this feature."
• The user can instead say:
  – I understand why, and why not.
  – I know when you'll succeed and when you'll fail.
  – I know when to trust you.
  – I know why you erred.
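The Today/Tomorrow contrast is easy to state in code. A deliberately toy sketch (the cat example and every feature name are invented for illustration; no real model is implied): today's learned function returns only a label and a confidence, while the XAI target pairs the prediction with evidence that an explanation interface can render.

```python
# Toy contrast between today's opaque output and the XAI target.
# The cat example and all feature names are invented for illustration.

def opaque_classifier(image):
    """Today: the learned function emits only a label and a confidence."""
    return {"label": "cat", "p": 0.93}

def explainable_classifier(image):
    """Tomorrow: an explainable model also surfaces its evidence."""
    return {
        "label": "cat",
        "p": 0.93,
        "evidence": ["fur", "whiskers", "claws"],           # answers "why?"
        "rejected": {"dog": "no features typical of dogs"},  # answers "why not?"
    }

def explanation_interface(result):
    """Render the model's evidence for the user."""
    why = ", ".join(result["evidence"])
    why_not = "; ".join(f"not {alt} ({reason})"
                        for alt, reason in result["rejected"].items())
    return (f"This is a {result['label']} (p = {result['p']}) "
            f"because it has {why}; {why_not}.")

print(explanation_interface(explainable_classifier(image=None)))
```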
Challenge Problems
Both challenge problem areas share a common structure: learn a model, explain its decisions, and let the user act on the explanation.
• Data Analytics (classification learning task): An analyst is looking for items of interest in massive multimedia data sets. The explainable model classifies items of interest in the large data set; the explanation interface explains why (and why not) for recommended items, e.g., "Two trucks performing a loading activity"; and the analyst decides which items to report or pursue.
• Autonomy (reinforcement learning task): An operator is directing autonomous systems to accomplish a series of missions. Using the ArduPilot SITL simulation, the system learns decision policies for simulated missions and explains its behavior in an after-action review; the operator decides which future tasks to delegate.

Goal: Performance and Explainability
XAI will create a suite of machine learning techniques that:
• Produce more explainable models while maintaining a high level of learning performance (e.g., prediction accuracy).
• Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
On a notional plot of learning performance versus explainability, today's techniques sit at high performance but low explainability; tomorrow's goal is to achieve both.

Measuring Explanation Effectiveness
Explanation framework: the system takes input from the current task and makes a recommendation, decision, or action; it provides the user an explanation that justifies that output; and the user makes a decision based on the explanation. Explanation effectiveness is measured along five dimensions (a toy scoring sketch follows below):
• User satisfaction: clarity of the explanation (user rating); utility of the explanation (user rating).
• Mental model: understanding of individual decisions and of the overall model; strength/weakness assessment; "what will it do" prediction; "how do I intervene" prediction.
• Task performance: does the explanation improve the user's decision and task performance? Artificial decision tasks are introduced to diagnose the user's understanding.
• Trust assessment: appropriate future use and trust.
• Correctability (extra credit): identifying errors, correcting errors, continuous training.

Performance vs. Explainability: Learning Techniques Today
On the notional plot, today's techniques span a trade-off:
• Neural nets / deep learning: highest learning performance, lowest explainability.
• Graphical models (Bayesian belief nets, SRL, CRFs, HBNs, MLNs, Markov models) and statistical models (SVMs, AOGs): intermediate.
• Ensemble methods (random forests): intermediate.
• Decision trees: lowest learning performance, highest explainability.
The new approach: create a suite of machine learning techniques that produce more explainable models while maintaining a high level of learning performance.
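The five effectiveness measures lend themselves to simple scoring once user trials are logged. A toy sketch follows; every field name and formula here is an illustrative assumption, not the program's actual evaluation protocol (which belongs to the NRL evaluator).

```python
# Toy scoring of the five explanation-effectiveness measures from
# logged user trials. All field names and formulas are invented.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    clarity: int             # 1-5 user rating of the explanation
    utility: int             # 1-5 user rating of the explanation
    predicted_behavior: str  # user's "what will it do?" guess
    actual_behavior: str     # what the system actually did
    user_correct: bool       # did the user's final decision succeed?
    relied: bool             # did the user accept the recommendation?
    system_correct: bool     # was the recommendation actually right?

def user_satisfaction(trials):
    return mean((t.clarity + t.utility) / 2 for t in trials)

def mental_model(trials):
    # Fraction of trials where the user predicted the system's behavior.
    return mean(int(t.predicted_behavior == t.actual_behavior) for t in trials)

def task_performance(with_expl, without_expl):
    # Improvement in user decision accuracy over a no-explanation baseline.
    return (mean(int(t.user_correct) for t in with_expl)
            - mean(int(t.user_correct) for t in without_expl))

def trust_calibration(trials):
    # Appropriate trust: the user relies exactly when the system is right.
    return mean(int(t.relied == t.system_correct) for t in trials)
```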
Three technical strategies span this trade space:
• Deep Explanation: modified deep learning techniques that learn explainable features.
• Interpretable Models: techniques that learn more structured, interpretable, causal models.
• Model Induction: techniques that infer an explainable model from any model treated as a black box (see the sketch later in this section).

XAI Concept and Technical Approaches
Each performer pairs an explainable model with an explanation interface:
• UC Berkeley: deep learning / reflexive and rational explanations
• Charles River Analytics: causal modeling / narrative generation
• UCLA: pattern theory / 3-level explanation
• Oregon State: adaptive programs / acceptance testing
• PARC: cognitive modeling / interactive training
• CMU: explainable reinforcement learning (XRL) / XRL interaction
• SRI International: deep learning / show-and-tell explanations
• Raytheon BBN: deep learning / argumentation and pedagogy
• UT Dallas: probabilistic logic / decision diagrams
• Texas A&M: mimic learning / interactive visualization
• Rutgers: model induction / Bayesian teaching
• IHMC: psychological model of explanation

Approaches to Deep Explanation (Berkeley, SRI, Raytheon BBN, OSU, CRA, PARC)
• Attention mechanisms
• Modular networks
• Feature identification
• Learn to explain
Example: a CNN-plus-RNN pipeline that outputs, "This is a Downy Woodpecker because it is a black and white bird with a red spot on its crown."

Network Dissection: Quantifying Interpretability of Deep Representations (MIT)
• An audit trail for a particular output unit shows the most strongly activated path through the network.
• Example: interpretation of several units in pool5 of AlexNet trained for place recognition.

Causal Model Induction (CRA)
• Experiment with the learned model as a grey box to learn an explainable causal probabilistic programming model.

Explanation by Selection of Teaching Examples (Rutgers)
• An explainable classification model labels a face as angry from features such as a lowered brow, flared nostrils, a raised mouth, a chin pushed out and up, thinned lips pushed out, and raised cheekbones.
• The explanation: "This face is angry because it is similar to these examples and dissimilar to these examples," with Bayesian Teaching performing the optimal selection of examples for machine explanation.

Autonomy (PARC, OSU)
• COGLE (Common Ground Learning and Explanation): an interactive sensemaking system to explain the learned performance capabilities of a UAS flying in an ArduPilot simulation testbed.
• xACT (Explanation-Informed Acceptance Testing of Deep Adaptive Programs): tools for explaining deep adaptive programs and discovering best principles for designing explanation user interfaces. Components include an xFSM, deep adaptive programs, a decision net, an explanation learner, a common-ground builder, a saliency visualizer, an xNN, an interactive naming interface, visual words, annotation-aware reinforcement learning, a game engine, and a robotics curriculum, supporting an explain-train-evaluate loop.
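Several of the approaches above share the model-induction pattern named on the strategy slide: probe a trained model as a black (or grey) box and fit a simpler, inspectable model to its behavior. A minimal sketch with scikit-learn, using a random forest as a stand-in for the opaque learner and a depth-3 decision tree as the induced explainable model; the specific models and dataset are illustrative, not any team's actual method.

```python
# Model induction sketch: distill a black-box model into a shallow,
# readable surrogate and measure the surrogate's fidelity to it.
# Models and dataset are illustrative stand-ins only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# 1. The opaque, high-performance learner (stands in for any black box).
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# 2. Induce an explainable model by training a shallow tree on the
#    black box's *predictions* rather than on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Fidelity: how often the surrogate agrees with the black box on
#    held-out data. High fidelity means the tree's readable rules are
#    a faithful account of the black box's behavior.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"surrogate fidelity to black box: {fidelity:.1%}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```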
Four Modes of Explanation (Raytheon BBN)
• Analytic statements: didactic statements in natural language that describe the elements and context that support a choice.
• Visualizations: displays that directly highlight portions of the raw data that support a choice and allow viewers to form their own perceptual understanding.
• Cases: explanations that invoke specific examples or stories that support the choice.
• Rejections of alternative choices ("common misconceptions" in pedagogy): arguments against less preferred answers, based on analytics, cases, and data.

XAI Program Structure
• TA1 Explainable Learners: multiple TA1 teams (deep learning, interpretable model, and model induction teams) will develop prototype explainable learning systems that include both an explainable model and an explanation interface.
• TA2 Psychological Model of Explanation: at least one TA2 team will summarize current psychological theories of explanation and develop a computational model of explanation from those theories, consulting on the evaluation framework.
• Evaluator (Naval Research Laboratory): defines the evaluation framework, covering learning performance and explanation effectiveness (user satisfaction, mental model, task performance, trust assessment, correctability).
• Challenge problem areas: data analytics (multimedia data) and autonomy (ArduPilot SITL simulation).

Challenge Problem Candidates
• Analytics: visual question answering (MovieQA, CLEVR) and activity recognition (ActivityNet).
• Autonomy: strategy games (StarCraft II, ELF MiniRTS) and vehicle control (ArduPilot, driving simulator).

Psychological Model of Explanation (IHMC)
Model of the explanation process and possible metrics: the XAI system provides an explanation, which revises the user's mental model and enables better performance. The explanation is assessed against "goodness" criteria and by a test of satisfaction; the user's mental model is assessed by a test of comprehension; performance is assessed by a test of performance. The process can engender trust or mistrust, which over time gives way to appropriate trust and enables appropriate use.

Schedule and Milestones
The program runs from 2017 to 2021 in two phases: Phase 1 (technology demonstrations) and Phase 2 (comparative evaluations). Key meetings: kickoff (May 9-11, 2017), progress report (November 6-8, 2017), and tech demos (May 7-9, 2018). The evaluator defines the evaluation framework, then runs three evaluations (Eval 1, 2, and 3), each followed by analysis of results, culminating in toolkit acceptance.
• TA1 (Explainable Learners) milestones:
  – Demonstrate the explainable learners against problems proposed by the developers (Phase 1).
  – Demonstrate the explainable learners against common problems (Phase 2).
  – Deliver software libraries and toolkits at the end of Phase 2.
• TA2 (Psychology of Explanation) milestones:
  – Deliver an interim report on psychological theories after 6 months (Phase 1).
  – Deliver a final report on psychological theories after 12 months (Phase 1).
  – Deliver a computational model of explanation after 24 months (Phase 2).
  – Deliver the computational model software at the end of Phase 2.
XAI Developers (TA1)
Performer | Explainable Model | Explanation Interface | Challenge Problem
UC Berkeley | Deep learning | Reflexive and rational | Both
Charles River | Causal modeling | Narrative generation | Both
UCLA | Pattern theory | 3-level explanation | Both
Oregon State | Adaptive programs | Acceptance testing | Autonomy
PARC | Cognitive modeling | Interactive training | Autonomy
CMU | Explainable RL (XRL) | XRL interaction | Autonomy
SRI International | Deep learning | Show-and-tell explanations | Analytics
Raytheon BBN | Deep learning | Argumentation and pedagogy | Analytics
UT Dallas | Probabilistic logic | Decision diagrams | Analytics
Texas A&M | Mimic learning | Interactive visualization | Analytics
Rutgers | Model induction | Bayesian teaching | Analytics

Berkeley (with BU, U Amsterdam, Kitware): Deeply Explainable Artificial Intelligence
• Explainable model (deep learning): explain implicit latent nodes by training additional DL models; explain explicit nodes through Neural Module Networks (NMNs).
• Explanation interface (reflexive and rational): reflexive explanations arise directly from the model; rational explanations come from reasoning about the user's beliefs.
• Challenge problems: autonomy (ArduPilot and OpenAI Gym simulations) and data analytics (visual QA and multimedia event QA).
• PI: Trevor Darrell (Berkeley), with Pieter Abbeel, Tom Griffiths, Dan Klein, John Canny, and Anca Dragan (Berkeley); Kate Saenko (BU); Zeynep Akata (U Amsterdam); Anthony Hoogs (Kitware).

CRA (with U Mass, Brown): CAMEL, Causal Models to Explain Learning
• Explainable model (model induction): experiment with the learned model as a grey box to learn an explainable causal probabilistic programming model (a toy intervention sketch follows below).
• Explanation interface (narrative generation): interactive visualization based on the generation of temporal and spatial narratives from the causal probabilistic models.
• Challenge problems: autonomy (Minecraft, StarCraft) and data analytics (pedestrian detection on INRIA; activity recognition on ActivityNet).
• PI: Brian Ruttenberg (CRA), with Avi Pfeffer, James Niehaus, Joe Gorman, and James Tittle (CRA); David Jensen (U Mass); Michael Littman (Brown); Emilie Roth (Roth Cognitive Engineering).

UCLA (with OSU, Michigan State): Learning and Communicating Explainable Representations for Analytics and Autonomy
• Explainable model (pattern theory): an integrated representation across an entropy spectrum: deep neural nets, stochastic and-or graphs (AOGs), and predicate calculus.
• Explanation interface (3-level explanation): integrates three levels of explanation: concept compositions, causal and counterfactual reasoning, and utility explanations.
• Challenge problems: autonomy (humanoid robot behavior and a VR simulation platform) and data analytics (understanding complex multimedia events).
• PI: Song-Chun Zhu (UCLA), with Ying Nian Wu (UCLA); Sinisa Todorovic (OSU); Joyce Chai (Michigan State).

OSU: xACT, Explanation-Informed Acceptance Testing of Deep Adaptive Programs
• Explainable model (adaptive programs): explainable deep adaptive programs (xDAPs), a new combination of adaptive programs, deep learning, and explainability.
• Explanation interface (acceptance testing): a visual and natural-language explanation interface for acceptance testing by test pilots, based on Information Foraging Theory.
• Challenge problem (autonomy): real-time strategy games on a custom game engine designed to support explanation, with possible use of StarCraft.
• PI: Alan Fern (OSU), with Tom Dietterich, Fuxin Li, Prasad Tadepalli, Weng-Keen Wong, Margaret Burnett, Martin Erwig, and Liang Huang (OSU).
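CAMEL-style causal models (above) answer contrastive "why" questions by intervention: what is the success probability under each candidate action? A toy three-variable causal model in plain Python; the structure and every probability are invented for illustration, and no CRA code or API is implied.

```python
# Toy causal "why" query: compare P(success | do(action)) across actions.
# The model and all numbers are invented for illustration only.
import random

def simulate(action, rng):
    # Hand-built causal chain: weather -> visibility -> mission success,
    # with the chosen action also influencing success.
    storm = rng.random() < 0.3              # exogenous weather
    visibility = 0.2 if storm else 0.9      # a storm degrades visibility
    if action == "fly_low":
        p_success = 0.8 * visibility + 0.1  # strongly visibility-dependent
    else:                                   # "fly_high"
        p_success = 0.5 * visibility + 0.4  # less visibility-dependent
    return rng.random() < p_success

def p_success_do(action, n=100_000, seed=0):
    # Estimate the interventional success probability by simulation.
    rng = random.Random(seed)
    return sum(simulate(action, rng) for _ in range(n)) / n

for action in ("fly_low", "fly_high"):
    print(f"P(success | do({action})) = {p_success_do(action):.3f}")
# A narrative generator could then phrase the contrast: "I flew high
# because storms are likely and flying high depends less on visibility."
```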
PARC (with CMU, U Edinburgh, U Michigan, West Point): COGLE, Common Ground Learning and Explanation
• Explainable model (cognitive model): a 3-layer architecture with a learning layer (DNNs), a cognitive layer (an ACT-R cognitive model), and an explanation layer (HCI), plus a Value of Explanation (VoE) framework for measuring explanation effectiveness.
• Explanation interface (interactive training): interactive visualization of states, actions, policies, and values, including a module for test pilots to refine and train the system.
• Challenge problem (autonomy): the ArduPilot simulation environment.
• PI: Mark Stefik (PARC), with Michael Youngblood (PARC); Christian Lebiere and John Anderson (CMU); Subramanian Ramamoorthy (U Edinburgh); Honglak Lee (U Michigan); Robert Thomson (USMA).

CMU (with Stanford): XRL, Explainable Reinforcement Learning for AI Autonomy
• Explainable model (XRL models): create a new scientific discipline of explainable reinforcement learning, with work on new algorithms and representations.
• Explanation interface (XRL interaction): interactive explanations of dynamic systems; human-machine interaction to improve performance.
• Challenge problems (autonomy): OpenAI Gym, autonomy in the electrical grid, mobile service robots, and self-improving educational software.
• PI: Geoff Gordon (CMU), with Zico Kolter, Pradeep Ravikumar, and Manuela Veloso (CMU); Emma Brunskill (Stanford).

SRI (with U Toronto, UCSD, U Guelph): DARE, Deep Attention-based Representations for Explanation
• Explainable model (deep learning): multiple deep learning techniques, including attention-based mechanisms, compositional NMNs, and GANs.
• Explanation interface (show-and-tell explanations): DNN visualization, query evidence that explains DNN decisions, and generated natural-language justifications.
• Challenge problems (data analytics): visual question answering (VQA) using Visual Genome and Flickr30k; MovieQA.
• PIs: Giedrius Burachas and Mohamed Amer (SRI), with Shalini Ghosh, Avi Ziskind, and Michael Wessel (SRI); Richard Zemel, Sanja Fidler, and David Duvenaud (U Toronto); Graham Taylor (U Guelph); Jürgen Schulze (UCSD).

Raytheon BBN (with GA Tech, UT Austin, MIT): EQUAS, Explainable QUestion Answering System
• Explainable model (deep learning): semantic labeling of DNN neurons, DNN audit-trail construction, and gradient-weighted class activation mapping (Grad-CAM).
• Explanation interface (argumentation theory): a comprehensive strategy based on argumentation theory, natural-language generation, and DNN visualization.
• Challenge problem (data analytics): visual question answering (VQA), beginning with images and progressing to video.
• PI: William Ferguson (Raytheon BBN), with Antonio Torralba (MIT); Ray Mooney (UT Austin); Devi Parikh and Dhruv Batra (GA Tech).
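Grad-CAM, the saliency technique EQUAS cites, is simple enough to sketch. A minimal version assuming PyTorch and a recent torchvision; the model, layer choice, and random input are illustrative stand-ins, not Raytheon BBN's implementation.

```python
# Minimal Grad-CAM sketch: weight the last conv block's activation maps
# by their pooled gradients to highlight class-relevant image regions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # load pretrained weights in real use
activations, gradients = {}, {}

# Hook the last convolutional block to capture activations and gradients.
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

image = torch.randn(1, 3, 224, 224)    # stand-in for a real input image
scores = model(image)
cls = scores.argmax(dim=1).item()
scores[0, cls].backward()              # gradient of the top class score

# Grad-CAM: channel weights are the globally averaged gradients; sum the
# weighted activation maps and clip negatives.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0,1]
print(cam.shape)  # (1, 1, 224, 224) saliency map to overlay on the image
```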
UT Dallas (with UCLA, Texas A&M, IIT-Delhi): Tractable Probabilistic Logic Models, A New Deep Explainable Representation
• Explainable model (probabilistic logic): tractable probabilistic logic models (TPLMs), an important class of non-deep-learning interpretable models.
• Explanation interface (decision diagrams): enables users to explore and correct the underlying model, as well as add background knowledge.
• Challenge problem (data analytics): infer activities in multimodal data (video and text), using the Wetlab biology and TACoS cooking datasets.
• PI: Vibhav Gogate (UT Dallas), with Nicholas Ruozzi (UT Dallas); Adnan Darwiche and Guy Van den Broeck (UCLA); Eric Ragan (Texas A&M); Parag Singla (IIT-Delhi).

Texas A&M (with Wash State): Transforming Deep Learning to Harness the Interpretability of Shallow Models, An Interactive End-to-End System
• Explainable model (mimic learning): a mimic-learning framework that combines deep learning models for prediction with shallow models for explanation.
• Explanation interface (interactive visualization): interactive visualization over multiple views, using heat maps, topic-model clusters, and displays of predictive features, plus metrics for explanation effectiveness.
• Challenge problem (data analytics): multiple tasks using data from Twitter, Facebook, ImageNet, UCI, NIST, and Kaggle.
• PI: Xia Hu (Texas A&M), with Eric Ragan (Texas A&M); Shuiwang Ji (Wash State).

Rutgers: Model Explanation by Optimal Selection of Teaching Examples
• Explainable model (model induction): select the optimal training examples to explain model decisions, based on Bayesian Teaching (a toy version follows at the end of this section).
• Explanation interface (Bayesian teaching): example-based explanation of the full model, of user-selected substructure, or of user-submitted examples.
• Challenge problems (data analytics): movie descriptions, image processing, caption data, movie events, and human motion events.
• PI: Patrick Shafto (Rutgers), with Scott Cheng-Hsin Yang (Rutgers).

IHMC (with MacroCognition, Michigan Tech): Naturalistic Decision Making Foundations of Explainable AI
• Literature review: an extensive review of relevant psychological theories of explanation.
• Computational model (naturalistic theory, Bayesian framework): extend the theory of naturalistic decision making to cover explanation; represent the reductionist mental models that humans develop as part of the explanatory process, including mental simulation.
• Model validation (experiments): conduct interactive assessments and formal human experiments to validate the model and develop metrics of explanation effectiveness.
• PI: Robert R. Hoffman (IHMC), with William J. Clancey, Jordan Litman (psychometrician), and Peter Pirolli (IHMC); Gary Klein (MacroCognition); Shane T. Mueller (Michigan Tech); COL Timothy M. Cullen (SAASS); Simon Attfield (Middlesex University London).

Naval Research Laboratory: XAI Evaluation
• Evaluation framework: evaluation protocols, training environment, training data, simulation environment, testing environment, subjects, web infrastructure, and baseline systems.
• Measurement: learning performance and explanation effectiveness across the analytics challenge problems (e.g., "Two trucks performing a loading activity") and the autonomy challenge problems, comparing today's baselines against the XAI systems.
• PI: David Aha (NRL), with Leslie Smith (NRL); Justin Karneeb and Matt Molineaux (Knexus); Mike Pazzani (UC Riverside).
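Finally, the Bayesian Teaching idea behind the Rutgers effort can be stated compactly: choose the teaching set that maximizes the learner's posterior belief in the target hypothesis. A toy one-dimensional version; the Gaussian learner, the two hypotheses, and every number are invented for illustration.

```python
# Toy Bayesian Teaching: pick the k examples that maximize the learner's
# posterior on the target hypothesis. All numbers are invented.
import itertools
import math

def gauss_loglik(x, mu, sigma=1.0):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

HYPOTHESES = {"angry": 2.0, "calm": -2.0}  # class means for a 1-D feature

def posterior(target, examples):
    # P(h | D) under a uniform prior over the two hypotheses.
    logs = {h: sum(gauss_loglik(x, mu) for x in examples)
            for h, mu in HYPOTHESES.items()}
    m = max(logs.values())
    z = sum(math.exp(v - m) for v in logs.values())
    return math.exp(logs[target] - m) / z

pool = [2.3, 1.8, 0.1, -0.4, 2.9, -2.2]    # candidate teaching examples
best = max(itertools.combinations(pool, 2),
           key=lambda d: posterior("angry", d))
print("teach with examples:", best)        # the most "angry-like" pair
print("posterior on 'angry':", round(posterior("angry", best), 4))
```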