

Contributions at README.md
+ nicer formatting + #77
@d592aac4363d3be312ac92f92ca2f773c35f402c
--- README.md
+++ README.md
@@ -3,16 +3,12 @@
 
 **Turning Whisper into Real-Time Transcription System**
 
-Demonstration paper, by Dominik Macháček, Raj Dabre, Ondřej Bojar, 2023
+Demonstration paper, by [Dominik Macháček](https://ufal.mff.cuni.cz/dominik-machacek), [Raj Dabre](https://prajdabre.github.io/), [Ondřej Bojar](https://ufal.mff.cuni.cz/ondrej-bojar), 2023
 
-Abstract: Whisper is one of the recent state-of-the-art multilingual speech recognition and translation models, however, it is not designed for real time transcription. In this paper, we build on top of Whisper and create Whisper-Streaming, an implementation of real-time speech transcription and translation of Whisper-like models. Whisper-Streaming uses local agreement policy with self-adaptive latency to enable streaming transcription. We show that Whisper-Streaming achieves high quality and 3.3 seconds latency on unsegmented long-form speech transcription test set, and we demonstrate its robustness and practical usability as a component in live transcription service at a multilingual conference.
+Abstract: Whisper is one of the recent state-of-the-art multilingual speech recognition and translation models, however, it is not designed for real-time transcription. In this paper, we build on top of Whisper and create Whisper-Streaming, an implementation of real-time speech transcription and translation of Whisper-like models. Whisper-Streaming uses local agreement policy with self-adaptive latency to enable streaming transcription. We show that Whisper-Streaming achieves high quality and 3.3 seconds latency on unsegmented long-form speech transcription test set, and we demonstrate its robustness and practical usability as a component in live transcription service at a multilingual conference.
 
 
-Paper PDF:
-https://aclanthology.org/2023.ijcnlp-demo.3.pdf
-
-
-Demo video: https://player.vimeo.com/video/840442741
+[Paper PDF](https://aclanthology.org/2023.ijcnlp-demo.3.pdf), [Demo video](https://player.vimeo.com/video/840442741)
 
 [Slides](http://ufallab.ms.mff.cuni.cz/~machacek/pre-prints/AACL23-2.11.2023-Turning-Whisper-oral.pdf) -- 15 minutes oral presentation at IJCNLP-AACL 2023
 
@@ -228,12 +224,20 @@
 re-process confirmed sentence prefixes and skip them, making sure they don't
 overlap, and we limit the processing buffer window.
 
-Contributions are welcome.
-
 ### Performance evaluation
 
 [See the paper.](http://www.afnlp.org/conferences/ijcnlp2023/proceedings/main-demo/cdrom/pdf/2023.ijcnlp-demo.3.pdf)
 
+### Contributions
+
+Contributions are welcome. We acknowledge especially:
+
+- [The GitHub contributors](https://github.com/ufal/whisper_streaming/graphs/contributors) for their pull requests with new features and bugfixes.
+- [The translation of this repo into Chinese.](https://github.com/Gloridust/whisper_streaming_CN)
+- [Ondřej Plátek](https://opla.cz/) for the paper pre-review.
+- [Peter Polák](https://ufal.mff.cuni.cz/peter-polak) for the original idea.
+- The UEDIN team of the [ELITR project](https://elitr.eu) for the original line_packet.py.
+
 
 ## Contact
 
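Note on the streaming approach mentioned in the abstract and the hunk context above: a hypothesis word is confirmed under a local agreement policy only once consecutive Whisper passes over the growing audio buffer agree on it, and the confirmed prefix is what allows trimming the processing buffer. The sketch below is a minimal illustration of that idea under my own assumptions; it is not the repository's actual implementation, and all names in it are hypothetical.

```python
# Illustrative sketch of a "local agreement" commit policy (hypothetical code,
# not taken from whisper_streaming): words are confirmed only when two
# consecutive decoding passes over the growing buffer agree on a prefix.

def longest_common_prefix(a, b):
    """Return the longest common prefix of two word lists."""
    out = []
    for x, y in zip(a, b):
        if x != y:
            break
        out.append(x)
    return out


class LocalAgreementPolicy:
    def __init__(self):
        self.previous_hypothesis = []  # words from the previous decoding pass
        self.committed = []            # words already confirmed (emitted to the user)

    def update(self, hypothesis):
        """Take the newest full-buffer transcript (list of words) and
        return only the newly confirmed words, if any."""
        agreed = longest_common_prefix(self.previous_hypothesis, hypothesis)
        # Only the agreed part that extends beyond what was already committed is new.
        newly_confirmed = agreed[len(self.committed):]
        self.committed.extend(newly_confirmed)
        self.previous_hypothesis = hypothesis
        return newly_confirmed


# Usage: feed successive re-transcriptions of the growing audio buffer.
policy = LocalAgreementPolicy()
print(policy.update("hello there how".split()))          # [] -- nothing to compare yet
print(policy.update("hello there how are".split()))      # ['hello', 'there', 'how']
print(policy.update("hello there how are you".split()))  # ['are']
```

In a real system the confirmed words would additionally mark a safe point for trimming the audio buffer, which is what keeps latency bounded; that part is omitted here for brevity.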