We have hosted the automated interpretability application so that you can run it on our online workstations, either with Wine or directly.
Quick description of automated interpretability:
The automated-interpretability repository implements tools and pipelines for automatically generating, simulating, and scoring explanations of neuron (or latent feature) behavior in neural networks. Instead of relying purely on manual, ad hoc interpretability probing, the repo aims to scale interpretability with algorithmic methods that produce candidate explanations and assess their quality. It includes a “neuron explainer” component that, given a target neuron or latent feature, proposes natural language explanations or heuristics (e.g. “this neuron activates when the input has property X”) and then simulates activation behavior across example inputs to test whether the explanation holds. The project also contains a “neuron viewer” web component for browsing neurons, explanations, and activation patterns, making exploration more interactive.
Features:
- A neuron explainer module that proposes natural language or rule-based explanations for neuron/latent feature behavior
- Simulation and scoring of explanations by comparing predicted activations against true activations across inputs (see the first sketch after this list)
- A neuron viewer UI to browse neurons, see activations, and inspect explanations
- Demo notebooks illustrating how explanations are generated and evaluated (e.g. explain_puzzles.ipynb)
- Infrastructure for activation capture and analysis (e.g. modules like activations.py; see the second sketch after this list)
- Ranking / scoring heuristics to decide which explanations are more faithful or useful
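The simulation-and-scoring step works roughly like this: activations predicted from an explanation are compared with the neuron's real activations, and the degree of agreement becomes the explanation's score. The sketch below illustrates the idea with a simple correlation-based score; the function name and the example numbers are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def score_explanation(true_activations, simulated_activations):
    """Score how well activations simulated from an explanation track the
    neuron's true activations, using Pearson correlation.

    Both inputs are per-token activation sequences flattened into 1-D arrays.
    A score near 1.0 means the explanation predicts the neuron's behavior well.
    """
    true_arr = np.asarray(true_activations, dtype=float)
    sim_arr = np.asarray(simulated_activations, dtype=float)
    # Degenerate case: a constant sequence has no variance to correlate against.
    if true_arr.std() == 0 or sim_arr.std() == 0:
        return 0.0
    return float(np.corrcoef(true_arr, sim_arr)[0, 1])

# Hypothetical example: the explanation "activates on punctuation tokens"
# predicts high activation on '.' and ',' and low activation elsewhere.
true = [0.1, 0.0, 2.3, 0.2, 1.9]        # measured neuron activations
simulated = [0.0, 0.0, 1.0, 0.0, 1.0]   # activations predicted from the explanation
print(score_explanation(true, simulated))  # ~0.99 -> a fairly faithful explanation
```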
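Activation capture, in the generic sense used above, means recording a layer's intermediate outputs while the model runs on example inputs. The following minimal sketch shows one common way to do this with a PyTorch forward hook on a toy model; the model, the layer choice, and the capture_activations helper are hypothetical and do not reflect the repository's activations.py module.

```python
import torch
import torch.nn as nn

def capture_activations(model, layer, inputs):
    """Run `inputs` through `model` and record the output of `layer`.

    A forward hook records the layer's output without modifying the model;
    the captured tensor can then be handed to an explainer/simulator pipeline.
    """
    captured = []

    def hook(_module, _inp, out):
        captured.append(out.detach().cpu())

    handle = layer.register_forward_hook(hook)
    try:
        with torch.no_grad():
            model(inputs)
    finally:
        handle.remove()  # always remove the hook, even if the forward pass fails
    return captured[0]

# Hypothetical toy model standing in for a transformer MLP block.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
acts = capture_activations(model, model[1], torch.randn(2, 8))
print(acts.shape)  # torch.Size([2, 16]) -- per-example activations of the ReLU layer
```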
Programming Language: Python.