We measure the importance of a feature by calculating the increase in the model’s prediction error after permuting the feature. A feature is “important” if shuffling its values increases the model error, because in this case the model relied on the feature for the prediction. A feature is “unimportant” if shuffling its values leaves the model error unchanged, because in this case the model ignored the feature for the prediction.
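To make this concrete, here is a minimal sketch of the procedure in Python (not the book's own code). It assumes an already fitted scikit-learn-style regressor `model`, a pandas DataFrame of features `X`, targets `y`, and mean squared error as the error measure; all of these names are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def permutation_importance_sketch(model, X, y, n_repeats=5, random_state=0):
    """Estimate feature importance as the increase in prediction error
    after shuffling each feature, averaged over several permutations."""
    rng = np.random.default_rng(random_state)
    baseline_error = mean_squared_error(y, model.predict(X))
    importances = {}
    for col in X.columns:
        errors = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffle one feature to break its association with the target
            X_perm[col] = rng.permutation(X_perm[col].to_numpy())
            errors.append(mean_squared_error(y, model.predict(X_perm)))
        # Importance = increase in error caused by permuting the feature;
        # near zero means the model effectively ignored the feature.
        importances[col] = np.mean(errors) - baseline_error
    return importances
```

For practical use, scikit-learn provides a ready-made implementation of this idea in `sklearn.inspection.permutation_importance`.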