Pre-trained foundation models have revolutionized speech technology, as they have many adjacent fields. The combination of their capability and opacity has sparked interest among researchers in interpreting these models in various ways. While interpretability research in fields such as computer vision and natural language processing has made significant progress towards understanding model internals and explaining their decisions, speech technology has lagged behind despite the widespread use of complex, black-box neural models. Recent studies have begun to address this gap, marked by a growing body of literature focused on interpretability in the speech domain. This tutorial provides a structured overview of interpretability techniques, their applications, implications, and limitations when applied to speech models, aiming to help researchers and practitioners better understand, evaluate, debug, and optimize speech models while building trust in their predictions. In hands-on sessions, participants will explore how speech models encode distinct features (e.g., linguistic information) and use them during inference. By the end, attendees will be equipped with the tools and knowledge to start analyzing and interpreting speech models in their own research, potentially inspiring new directions.
```{note}
We will present our tutorial about _Interpretability Techniques for Speech Models_ on **Sunday, August 17th** at this year's Interspeech conference in Rotterdam. <br> Check out the [preliminary programme](#interspeech-programme) below, and sign up through the [Interspeech registration form](https://www.interspeech2025.org/registration)!
```

(interspeech-programme)=
## Preliminary programme at Interspeech 2025

> **Introduction** to challenges of speech data for interpretability research <br>
> Tutorial on **Feature Importance Scoring methods** for speech model interpretability <br> incl. Context-mixing (Attention, Value-Zeroing) and Feature attribution (Gradient- and Perturbation-based methods) <br>
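
The perturbation-based attribution idea listed above can be illustrated with a minimal sketch. A hypothetical linear scorer stands in for a real speech model here, and occlusion-style perturbation recovers per-frame importance; all names (`model_score`, `occlusion_attribution`) are illustrative, not from a real library:

```python
import numpy as np

# Toy stand-in for a speech classifier: a linear scorer over 16 input
# "frames". Purely illustrative, not a real speech model.
rng = np.random.default_rng(0)
weights = rng.normal(size=16)

def model_score(x):
    """Logit-like score for one class (hypothetical model)."""
    return float(weights @ x)

def occlusion_attribution(x, baseline=0.0):
    """Perturbation-based feature attribution: each frame's importance
    is the score drop when that frame is replaced by a baseline value."""
    full = model_score(x)
    attributions = np.empty_like(x)
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = baseline
        attributions[i] = full - model_score(perturbed)
    return attributions

x = rng.normal(size=16)
attr = occlusion_attribution(x)
# For a linear scorer with a zero baseline, occlusion recovers
# weight * input (up to floating-point error).
assert np.allclose(attr, weights * x)
```

Real speech models are nonlinear, so occlusion there only approximates local importance, but the score-drop logic is the same.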
Jupyter Book also lets you write text-based notebooks using MyST Markdown.
See [the Notebooks with MyST Markdown documentation](https://jupyterbook.org/file-types/myst-notebooks.html) for more detailed instructions.
This page shows off a notebook written in MyST Markdown.

## An example cell

With MyST Markdown, you can define code cells with a directive like so:

```{code-cell}
print(2 + 2)
```

When your book is built, the contents of any `{code-cell}` blocks will be
executed with your default Jupyter kernel, and their outputs will be displayed
in-line with the rest of your content.

```{seealso}
Jupyter Book uses [Jupytext](https://jupytext.readthedocs.io/en/latest/) to convert text-based files to notebooks, and can support [many other text-based notebook files](https://jupyterbook.org/file-types/jupytext.html).
```

## Create a notebook with MyST Markdown

MyST Markdown notebooks are defined by two things:

1. YAML metadata that is needed to understand if / how it should convert text files to notebooks (including information about the kernel needed).
   See the YAML at the top of this page for example.
2. The presence of `{code-cell}` directives, which will be executed with your book.
That's all that is needed to get started!
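
For reference, the YAML metadata mentioned in step 1 typically looks like the following front matter (the kernel names will vary with your setup; this is a sketch, not the exact metadata of this page):

```yaml
---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
kernelspec:
  display_name: Python 3
  language: python
  name: python3
---
```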
## Quickly add YAML metadata for MyST Notebooks
If you have a markdown file and you'd like to quickly add YAML metadata to it, so that Jupyter Book will treat it as a MyST Markdown Notebook, run the following command:
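
A likely form of that command, assuming the Jupyter Book CLI is installed (the file path and kernel name below are placeholders):

```shell
jupyter-book myst init path/to/markdownfile.md --kernel kernelname
```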