# Understanding LM Output via Token Salience
Google researchers found that Gradient L2 salience consistently produces more faithful token-level salience explanations than other input salience methods, as visualized in the Language Interpretability Tool (LIT).
[Will You Find These Shortcuts? (Google AI Blog)](https://ai.googleblog.com/2022/12/will-you-find-these-shortcuts.html)
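As a rough sketch of the core idea: Gradient L2 salience scores each input token by the L2 norm of the gradient of the model's output with respect to that token's embedding. The snippet below assumes the per-token embedding gradients have already been computed (a synthetic array stands in for them here); it is a minimal illustration, not the LIT implementation.

```python
import numpy as np

def grad_l2_salience(token_grads: np.ndarray) -> np.ndarray:
    """Gradient L2 salience: the L2 norm of each token's embedding
    gradient, normalized so the scores sum to 1 over the sequence."""
    norms = np.linalg.norm(token_grads, axis=-1)  # shape: (num_tokens,)
    return norms / norms.sum()

# Hypothetical gradients for a 4-token input with 8-dim embeddings,
# standing in for gradients a framework's autograd would produce.
rng = np.random.default_rng(0)
grads = rng.normal(size=(4, 8))
salience = grad_l2_salience(grads)
print(salience)  # one nonnegative score per token, summing to 1
```

In practice these gradients come from backpropagating the model's prediction score to the token embedding layer; the normalized scores are what LIT renders as a per-token heatmap.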