

docs: add examples, roadmap, etc.
@9f5626715013ee033938dcc69d3a1009fdc59ca3
--- Dockerfile.cpu
+++ Dockerfile.cpu
@@ -12,5 +12,8 @@
 COPY ./speaches ./speaches
 ENTRYPOINT ["poetry", "run"]
 CMD ["uvicorn", "speaches.main:app"]
-ENV MODEL_SIZE=distil-small.en
-ENV DEVICE=cpu
+ENV WHISPER_MODEL=distil-small.en
+ENV WHISPER_INFERENCE_DEVICE=cpu
+ENV WHISPER_COMPUTE_TYPE=int8
+ENV UVICORN_HOST=0.0.0.0
+ENV UVICORN_PORT=8000
--- Dockerfile.cuda
+++ Dockerfile.cuda
@@ -12,5 +12,7 @@
 COPY ./speaches ./speaches
 ENTRYPOINT ["poetry", "run"]
 CMD ["uvicorn", "speaches.main:app"]
-ENV MODEL_SIZE=distil-medium.en
-ENV DEVICE=cuda
+ENV WHISPER_MODEL=distil-medium.en
+ENV WHISPER_INFERENCE_DEVICE=cuda
+ENV UVICORN_HOST=0.0.0.0
+ENV UVICORN_PORT=8000
--- README.md
+++ README.md
@@ -1,25 +1,51 @@
-# WARN: WIP(code is ugly, may have bugs, test files aren't included, etc.)
+# WARN: WIP (code is ugly, bad documentation, may have bugs, test files aren't included, CPU inference was barely tested, etc.)
 # Intro
-`speaches` is a webserver that supports real-time transcription using WebSockets.
-- [faster-whisper](https://github.com/SYSTRAN/faster-whisper) is used as the backend. Both GPU and CPU inference is supported.
-- LocalAgreement2([paper](https://aclanthology.org/2023.ijcnlp-demo.3.pdf)|[original implementation](https://github.com/ufal/whisper_streaming)) algorithm is used for real-time transcription.
-- Can be deployed using Docker (Compose configuration can be found in (compose.yaml[./compose.yaml])).
+:peach:`speaches` is a web server that supports real-time transcription using WebSockets.
+- [faster-whisper](https://github.com/SYSTRAN/faster-whisper) is used as the backend. Both GPU and CPU inference are supported.
+- The LocalAgreement2 ([paper](https://aclanthology.org/2023.ijcnlp-demo.3.pdf) | [original implementation](https://github.com/ufal/whisper_streaming)) algorithm is used for real-time transcription.
+- Can be deployed using Docker (the Compose configuration can be found in [compose.yaml](./compose.yaml)).
 - All configuration is done through environment variables. See [config.py](./speaches/config.py).
 - NOTE: only transcription of single channel, 16000 sample rate, raw, 16-bit little-endian audio is supported.
-- NOTE: this isn't really meant to be used as a standalone tool but rather to add transcription features to other applications
+- NOTE: this isn't really meant to be used as a standalone tool but rather to add transcription features to other applications.
 Please create an issue if you find a bug, have a question, or a feature suggestion.
 # Quick Start
-NOTE: You'll need to install [websocat](https://github.com/vi/websocat?tab=readme-ov-file#installation) or an alternative.
-Spinning up a `speaches` web-server
+Spinning up a `speaches` web server
 ```bash
-docker run --detach --gpus=all --publish 8000:8000 --mount ~/.cache/huggingface:/root/.cache/huggingface --name speaches fedirz/speaches:cuda
+docker run --gpus=all --publish 8000:8000 --mount type=bind,source=$HOME/.cache/huggingface,target=/root/.cache/huggingface fedirz/speaches:cuda
 # or
-docker run --detach --publish 8000:8000 --mount ~/.cache/huggingface:/root/.cache/huggingface --name speaches fedirz/speaches:cpu
+docker run --publish 8000:8000 --mount type=bind,source=$HOME/.cache/huggingface,target=/root/.cache/huggingface fedirz/speaches:cpu
 ```
-Sending audio data via websocket
+Streaming audio data from a microphone. [websocat](https://github.com/vi/websocat?tab=readme-ov-file#installation) installation is required.
 ```bash
-arecord -f S16_LE -c1 -r 16000 -t raw -D default | websocat --binary ws://localhost:8000/v1/audio/transcriptions
+ffmpeg -loglevel quiet -f alsa -i default -ac 1 -ar 16000 -f s16le - | websocat --binary ws://0.0.0.0:8000/v1/audio/transcriptions
 # or
-ffmpeg -f alsa -ac 1 -ar 16000 -sample_fmt s16le -i default | websocat --binary ws://localhost:8000/v1/audio/transcriptions
+arecord -f S16_LE -c1 -r 16000 -t raw -D default 2>/dev/null | websocat --binary ws://0.0.0.0:8000/v1/audio/transcriptions
 ```
-# Example
+Streaming audio data from a file.
+```bash
+ffmpeg -loglevel quiet -f alsa -i default -ac 1 -ar 16000 -f s16le - > output.raw
+# send all data at once
+cat output.raw | websocat --no-close --binary ws://0.0.0.0:8000/v1/audio/transcriptions
+# Output: {"text":"One,"}{"text":"One, two, three, four, five."}{"text":"One, two, three, four, five."}%
+# stream 16000 samples per second; each sample is 2 bytes
+cat output.raw | pv -qL 32000 | websocat --no-close --binary ws://0.0.0.0:8000/v1/audio/transcriptions
+# Output: {"text":"One,"}{"text":"One, two,"}{"text":"One, two, three,"}{"text":"One, two, three, four, five."}{"text":"One, two, three, four, five. one."}%
+```
+Transcribing a file
+```bash
+# convert the file if it has a different format
+ffmpeg -i output.wav -ac 1 -ar 16000 -f s16le output.raw
+curl -X POST -F "file=@output.raw" http://0.0.0.0:8000/v1/audio/transcriptions
+# Output: "{\"text\":\"One, two, three, four, five.\"}"%
+```
+# Roadmap
+- [ ] Support file transcription (non-streaming) of multiple formats.
+- [ ] CLI client.
+- [ ] Separate the web-server code from the "core" and publish the "core" as a package.
+- [ ] Additional documentation and code comments.
+- [ ] Write benchmarks for measuring streaming transcription performance. Possible metrics:
+  - Latency (time between when the audio is received and when its transcription is sent)
+  - Accuracy (already measured during testing, but the process can be improved)
+  - Total seconds of audio transcribed / audio duration (since each audio chunk is processed at least twice)
+- [ ] Get the API response closer to the format used by OpenAI.
+- [ ] Integrations...
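
The quick-start examples above drive the WebSocket endpoint with `websocat`. For readers who prefer doing the same from code, here is a minimal, hedged sketch of a client in Python; it assumes the third-party `websockets` package (not part of `speaches`) and an `output.raw` file in the 16 kHz, mono, 16-bit little-endian format produced by the README commands.

```python
# Rough sketch of a client for ws://0.0.0.0:8000/v1/audio/transcriptions.
# Assumptions: `pip install websockets`, and output.raw is raw 16 kHz mono s16le audio.
import asyncio

import websockets  # third-party package, not shipped with speaches

CHUNK_BYTES = 32000  # 16000 samples/s * 2 bytes per sample ~= 1 second of audio


async def main() -> None:
    async with websockets.connect("ws://0.0.0.0:8000/v1/audio/transcriptions") as ws:
        with open("output.raw", "rb") as f:
            while chunk := f.read(CHUNK_BYTES):
                await ws.send(chunk)    # stream raw PCM bytes
                await asyncio.sleep(1)  # pace roughly at real time
        try:
            # Print whatever partial/final transcriptions the server sends back,
            # e.g. {"text":"One, two,"}; the loop ends when the server closes.
            async for message in ws:
                print(message)
        except websockets.ConnectionClosed:
            pass


asyncio.run(main())
```

This is only an illustration of the protocol described above (raw PCM in, JSON text out), not an official client; the roadmap's "CLI client" item would presumably supersede it.
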
--- compose.yaml
+++ compose.yaml
@@ -11,8 +11,6 @@
     restart: unless-stopped
     ports:
       - 8000:8000
-    environment:
-      - INFERENCE_DEVICE=cuda
     deploy:
       resources:
         reservations:
@@ -30,5 +28,3 @@
     restart: unless-stopped
     ports:
       - 8000:8000
-    environment:
-      - INFERENCE_DEVICE=cpu
--- flake.nix
+++ flake.nix
@@ -23,6 +23,7 @@
 lsyncd
 poetry
 pre-commit
+pv
 pyright
 python311
 websocat
--- speaches/config.py
+++ speaches/config.py
@@ -37,6 +37,7 @@
 
 
 # https://github.com/OpenNMT/CTranslate2/blob/master/docs/quantization.md
+# NOTE: `Precision` might be a better name
 class Quantization(enum.StrEnum):
     INT8 = "int8"
     INT8_FLOAT16 = "int8_float16"
@@ -153,24 +154,35 @@
 
 
 class WhisperConfig(BaseModel):
-    model: Model = Field(default=Model.DISTIL_SMALL_EN)
-    inference_device: Device = Field(default=Device.AUTO)
-    compute_type: Quantization = Field(default=Quantization.DEFAULT)
+    model: Model = Field(default=Model.DISTIL_SMALL_EN)  # ENV: WHISPER_MODEL
+    inference_device: Device = Field(
+        default=Device.AUTO
+    )  # ENV: WHISPER_INFERENCE_DEVICE
+    compute_type: Quantization = Field(
+        default=Quantization.DEFAULT
+    )  # ENV: WHISPER_COMPUTE_TYPE
 
 
 class Config(BaseSettings):
     model_config = SettingsConfigDict(env_nested_delimiter="_")
 
-    log_level: str = "info"
-    whisper: WhisperConfig = WhisperConfig()
+    log_level: str = "info"  # ENV: LOG_LEVEL
+    whisper: WhisperConfig = WhisperConfig()  # ENV: WHISPER_*
     """
-    Max duration to for the next audio chunk before finilizing the transcription and closing the connection.
+    Max duration to wait for the next audio chunk before the transcription is finalized and the connection is closed.
     """
-    max_no_data_seconds: float = 1.0
-    min_duration: float = 1.0
-    word_timestamp_error_margin: float = 0.2
-    inactivity_window_seconds: float = 3.0
-    max_inactivity_seconds: float = 1.5
+    max_no_data_seconds: float = 1.0  # ENV: MAX_NO_DATA_SECONDS
+    min_duration: float = 1.0  # ENV: MIN_DURATION
+    word_timestamp_error_margin: float = 0.2  # ENV: WORD_TIMESTAMP_ERROR_MARGIN
+    """
+    Max allowed audio duration without any speech being detected before the transcription is finalized and the connection is closed.
+    """
+    max_inactivity_seconds: float = 2.0  # ENV: MAX_INACTIVITY_SECONDS
+    """
+    Controls how many of the latest seconds of audio are passed through VAD.
+    Should be greater than `max_inactivity_seconds`.
+    """
+    inactivity_window_seconds: float = 3.0  # ENV: INACTIVITY_WINDOW_SECONDS
 
 
 config = Config()
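
The new `# ENV:` comments rely on pydantic-settings' `env_nested_delimiter="_"` behaviour: a flat variable like `WHISPER_MODEL` is split on `_` and routed to the nested `whisper.model` field. A self-contained sketch of that mapping (simplified, hypothetical field set rather than the real `speaches` config; assumes `pydantic-settings` is installed):

```python
# Minimal sketch of env_nested_delimiter="_" resolution, not the actual speaches config.
import os

from pydantic import BaseModel
from pydantic_settings import BaseSettings, SettingsConfigDict


class WhisperConfig(BaseModel):
    model: str = "distil-small.en"


class Config(BaseSettings):
    model_config = SettingsConfigDict(env_nested_delimiter="_")

    log_level: str = "info"
    whisper: WhisperConfig = WhisperConfig()


os.environ["LOG_LEVEL"] = "debug"                  # top-level field
os.environ["WHISPER_MODEL"] = "distil-medium.en"   # "whisper" + "_" + "model" -> nested field

config = Config()
print(config.log_level)      # debug
print(config.whisper.model)  # distil-medium.en
```

This is why the Dockerfiles above can switch from `MODEL_SIZE`/`DEVICE` to `WHISPER_MODEL`/`WHISPER_INFERENCE_DEVICE` without any extra plumbing: the variable names encode the path into the nested config.
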
--- speaches/main.py
+++ speaches/main.py
@@ -90,6 +90,8 @@
             audio_stream.duration - config.inactivity_window_seconds
         )
         vad_opts = VadOptions(min_silence_duration_ms=500, speech_pad_ms=0)
+        # NOTE: This is a synchronous operation that runs every time new data is received.
+        # This shouldn't be an issue unless data is being received in tiny chunks or the user's machine is a potato.
         timestamps = get_speech_timestamps(audio.data, vad_opts)
         if len(timestamps) == 0:
             logger.info(
@@ -143,7 +145,6 @@
         tg.create_task(audio_receiver(ws, audio_stream))
         async for transcription in audio_transcriber(asr, audio_stream):
             logger.debug(f"Sending transcription: {transcription.text}")
-            # Or should it be
             if ws.client_state == WebSocketState.DISCONNECTED:
                 break
             await ws.send_text(format_transcription(transcription, response_format))
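
On the new NOTE about `get_speech_timestamps` being synchronous: if it ever did become a bottleneck, one possible mitigation (untested, purely illustrative, not what `speaches` does today) would be to push the blocking call onto a worker thread so the event loop keeps servicing the WebSocket:

```python
# Illustrative sketch: offloading a blocking call (a stand-in for
# get_speech_timestamps(audio.data, vad_opts)) with asyncio.to_thread.
import asyncio
import time


def blocking_vad(samples: list[float]) -> list[dict]:
    # Stand-in for the real VAD call; pretend it takes 50 ms.
    time.sleep(0.05)
    return [{"start": 0, "end": len(samples)}]


async def handle_chunk(samples: list[float]) -> None:
    # The event loop stays free while the VAD runs in a worker thread.
    timestamps = await asyncio.to_thread(blocking_vad, samples)
    print(timestamps)


asyncio.run(handle_chunk([0.0] * 16000))
```
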