Fedir Zadniprovskyi 2024-05-20
docs: add examples, roadmap, etc.
@9f5626715013ee033938dcc69d3a1009fdc59ca3
Dockerfile.cpu
--- Dockerfile.cpu
+++ Dockerfile.cpu
@@ -12,5 +12,8 @@
 COPY ./speaches ./speaches
 ENTRYPOINT ["poetry", "run"]
 CMD ["uvicorn", "speaches.main:app"]
-ENV MODEL_SIZE=distil-small.en
-ENV DEVICE=cpu
+ENV WHISPER_MODEL=distil-small.en
+ENV WHISPER_INFERENCE_DEVICE=cpu
+ENV WHISPER_COMPUTE_TYPE=int8
+ENV UVICORN_HOST=0.0.0.0
+ENV UVICORN_PORT=8000
Dockerfile.cuda
--- Dockerfile.cuda
+++ Dockerfile.cuda
@@ -12,5 +12,7 @@
 COPY ./speaches ./speaches
 ENTRYPOINT ["poetry", "run"]
 CMD ["uvicorn", "speaches.main:app"]
-ENV MODEL_SIZE=distil-medium.en
-ENV DEVICE=cuda
+ENV WHISPER_MODEL=distil-medium.en
+ENV WHISPER_INFERENCE_DEVICE=cuda
+ENV UVICORN_HOST=0.0.0.0
+ENV UVICORN_PORT=8000
README.md
--- README.md
+++ README.md
@@ -1,25 +1,51 @@
-# WARN: WIP(code is ugly, may have bugs, test files aren't included, etc.)
+# WARN: WIP (code is ugly, documentation is lacking, it may have bugs, test files aren't included, CPU inference was barely tested, etc.)
 # Intro
-`speaches` is a webserver that supports real-time transcription using WebSockets.
-- [faster-whisper](https://github.com/SYSTRAN/faster-whisper) is used as the backend. Both GPU and CPU inference is supported.
-- LocalAgreement2([paper](https://aclanthology.org/2023.ijcnlp-demo.3.pdf)|[original implementation](https://github.com/ufal/whisper_streaming)) algorithm is used for real-time transcription.
-- Can be deployed using Docker (Compose configuration can be found in (compose.yaml[./compose.yaml])).
+:peach:`speaches` is a web server that supports real-time transcription using WebSockets.
+- [faster-whisper](https://github.com/SYSTRAN/faster-whisper) is used as the backend. Both GPU and CPU inference are supported.
+- The LocalAgreement2 ([paper](https://aclanthology.org/2023.ijcnlp-demo.3.pdf) | [original implementation](https://github.com/ufal/whisper_streaming)) algorithm is used for real-time transcription (a rough sketch of the idea follows this list).
+- Can be deployed using Docker (Compose configuration can be found in [compose.yaml](./compose.yaml)).
 - All configuration is done through environment variables. See [config.py](./speaches/config.py).
 - NOTE: only transcription of single channel, 16000 sample rate, raw, 16-bit little-endian audio is supported.
-- NOTE: this isn't really meant to be used as a standalone tool but rather to add transcription features to other applications
+- NOTE: this isn't really meant to be used as a standalone tool but rather to add transcription features to other applications.
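+The LocalAgreement2 idea, roughly: a partial transcription is only committed once two consecutive hypotheses agree on it, which keeps the streamed output from flickering. A minimal sketch of the agreement step (not this project's implementation):
+```python
+def agreed_prefix(prev_hypothesis: list[str], curr_hypothesis: list[str]) -> list[str]:
+    """Return the longest common word prefix of two consecutive hypotheses."""
+    prefix = []
+    for prev_word, curr_word in zip(prev_hypothesis, curr_hypothesis):
+        if prev_word.lower() != curr_word.lower():
+            break
+        prefix.append(curr_word)
+    return prefix
+
+
+# As more audio arrives, the hypotheses stabilize and the committed prefix grows.
+agreed_prefix(["one", "two", "tree"], ["one", "two", "three", "four"])  # -> ["one", "two"]
+```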
 Please create an issue if you find a bug, have a question, or a feature suggestion.
 # Quick Start
-NOTE: You'll need to install [websocat](https://github.com/vi/websocat?tab=readme-ov-file#installation) or an alternative.
-Spinning up a `speaches` web-server
+Spinning up a `speaches` web server
 ```bash
-docker run --detach --gpus=all --publish 8000:8000 --mount ~/.cache/huggingface:/root/.cache/huggingface --name speaches fedirz/speaches:cuda
+docker run --gpus=all --publish 8000:8000 --mount type=bind,source=$HOME/.cache/huggingface,target=/root/.cache/huggingface fedirz/speaches:cuda
 # or
-docker run --detach --publish 8000:8000 --mount ~/.cache/huggingface:/root/.cache/huggingface --name speaches fedirz/speaches:cpu
+docker run --publish 8000:8000 --mount type=bind,source=$HOME/.cache/huggingface,target=/root/.cache/huggingface fedirz/speaches:cpu
 ```
-Sending audio data via websocket
+Streaming audio data from a microphone. [websocat](https://github.com/vi/websocat?tab=readme-ov-file#installation) installation is required.
 ```bash
-arecord -f S16_LE -c1 -r 16000 -t raw -D default | websocat --binary ws://localhost:8000/v1/audio/transcriptions
+ffmpeg -loglevel quiet -f alsa -i default -ac 1 -ar 16000 -f s16le - | websocat --binary ws://0.0.0.0:8000/v1/audio/transcriptions
 # or
-ffmpeg -f alsa -ac 1 -ar 16000 -sample_fmt s16le -i default | websocat --binary ws://localhost:8000/v1/audio/transcriptions
+arecord -f S16_LE -c1 -r 16000 -t raw -D default 2>/dev/null | websocat --binary ws://0.0.0.0:8000/v1/audio/transcriptions
 ```
-# Example
+Streaming audio data from a file.
+```bash
+# record some raw audio from the microphone into a file
+ffmpeg -loglevel quiet -f alsa -i default -ac 1 -ar 16000 -f s16le - > output.raw
+# send all data at once
+cat output.raw | websocat --no-close --binary ws://0.0.0.0:8000/v1/audio/transcriptions
+# Output: {"text":"One,"}{"text":"One,  two,  three,  four,  five."}{"text":"One,  two,  three,  four,  five."}%
+# stream 16000 samples per second (each sample is 2 bytes)
+cat output.raw | pv -qL 32000 | websocat --no-close --binary ws://0.0.0.0:8000/v1/audio/transcriptions
+# Output: {"text":"One,"}{"text":"One,  two,"}{"text":"One,  two,  three,"}{"text":"One,  two,  three,  four,  five."}{"text":"One,  two,  three,  four,  five.  one."}%
+```
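+The same streaming request can also be made from Python. A minimal sketch, assuming the [websockets](https://github.com/python-websockets/websockets) package and the raw 16000 Hz, mono, s16le `output.raw` file recorded above:
+```python
+import asyncio
+
+import websockets
+
+
+async def main() -> None:
+    uri = "ws://0.0.0.0:8000/v1/audio/transcriptions"
+    async with websockets.connect(uri) as ws:
+        with open("output.raw", "rb") as f:
+            # Pace the upload roughly in real time:
+            # 16000 bytes == 0.5 seconds of 16-bit, mono, 16000 Hz audio.
+            while chunk := f.read(16000):
+                await ws.send(chunk)
+                await asyncio.sleep(0.5)
+        # Print transcription messages until the server finalizes and closes the connection.
+        async for message in ws:
+            print(message)
+
+
+asyncio.run(main())
+```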
+Transcribing a file
+```bash
+# convert the file if it has a different format
+ffmpeg -i output.wav -ac 1 -ar 16000 -f s16le output.raw
+curl -X POST -F "file=@output.raw" http://0.0.0.0:8000/v1/audio/transcriptions
+# Output: "{\"text\":\"One,  two,  three,  four,  five.\"}"%
+```
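+The same request from Python, assuming the [httpx](https://github.com/encode/httpx) package (any HTTP client works):
+```python
+import httpx
+
+# POST the raw audio file as multipart form data, mirroring the curl command above.
+with open("output.raw", "rb") as f:
+    response = httpx.post(
+        "http://0.0.0.0:8000/v1/audio/transcriptions",
+        files={"file": ("output.raw", f)},
+    )
+print(response.text)
+```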
+# Roadmap
+- [ ] Support file transcription (non-streaming) of multiple formats.
+- [ ] CLI client.
+- [ ] Separate the web-server-related code from the "core" and publish the "core" as a package.
+- [ ] Additional documentation and code comments.
+- [ ] Write benchmarks for measuring streaming transcription performance. Possible metrics:
+    - Latency (time between when audio is received and when its transcription is sent back)
+    - Accuracy (already being measured when testing but the process can be improved)
+    - Total seconds of audio transcribed / audio duration (since each audio chunk is being processed at least twice)
+- [ ] Get the API response closer to the format used by OpenAI.
+- [ ] Integrations...
compose.yaml
--- compose.yaml
+++ compose.yaml
@@ -11,8 +11,6 @@
     restart: unless-stopped
     ports:
       - 8000:8000
-    environment:
-      - INFERENCE_DEVICE=cuda
     deploy:
       resources:
         reservations:
@@ -30,5 +28,3 @@
     restart: unless-stopped
     ports:
       - 8000:8000
-    environment:
-      - INFERENCE_DEVICE=cpu
flake.nix
--- flake.nix
+++ flake.nix
@@ -23,6 +23,7 @@
               lsyncd
               poetry
               pre-commit
+              pv
               pyright
               python311
               websocat
speaches/config.py
--- speaches/config.py
+++ speaches/config.py
@@ -37,6 +37,7 @@
 
 
 # https://github.com/OpenNMT/CTranslate2/blob/master/docs/quantization.md
+# NOTE: `Precision` might be a better name
 class Quantization(enum.StrEnum):
     INT8 = "int8"
     INT8_FLOAT16 = "int8_float16"
@@ -153,24 +154,35 @@
 
 
 class WhisperConfig(BaseModel):
-    model: Model = Field(default=Model.DISTIL_SMALL_EN)
-    inference_device: Device = Field(default=Device.AUTO)
-    compute_type: Quantization = Field(default=Quantization.DEFAULT)
+    model: Model = Field(default=Model.DISTIL_SMALL_EN)  # ENV: WHISPER_MODEL
+    inference_device: Device = Field(
+        default=Device.AUTO
+    )  # ENV: WHISPER_INFERENCE_DEVICE
+    compute_type: Quantization = Field(
+        default=Quantization.DEFAULT
+    )  # ENV: WHISPER_COMPUTE_TYPE
 
 
 class Config(BaseSettings):
     model_config = SettingsConfigDict(env_nested_delimiter="_")
 
-    log_level: str = "info"
-    whisper: WhisperConfig = WhisperConfig()
+    log_level: str = "info"  # ENV: LOG_LEVEL
+    whisper: WhisperConfig = WhisperConfig()  # ENV: WHISPER_*
     """
-    Max duration to for the next audio chunk before finilizing the transcription and closing the connection.
+    Max duration to wait for the next audio chunk before the transcription is finalized and the connection is closed.
     """
-    max_no_data_seconds: float = 1.0
-    min_duration: float = 1.0
-    word_timestamp_error_margin: float = 0.2
-    inactivity_window_seconds: float = 3.0
-    max_inactivity_seconds: float = 1.5
+    max_no_data_seconds: float = 1.0  # ENV: MAX_NO_DATA_SECONDS
+    min_duration: float = 1.0  # ENV: MIN_DURATION
+    word_timestamp_error_margin: float = 0.2  # ENV: WORD_TIMESTAMP_ERROR_MARGIN
+    """
+    Max allowed audio duration without any speech being detected before the transcription is finalized and the connection is closed.
+    """
+    max_inactivity_seconds: float = 2.0  # ENV: MAX_INACTIVITY_SECONDS
+    """
+    Controls how many of the latest seconds of audio are passed through VAD.
+    Should be greater than `max_inactivity_seconds`.
+    """
+    inactivity_window_seconds: float = 3.0  # ENV: INACTIVITY_WINDOW_SECONDS
 
 
 config = Config()
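A stripped-down sketch of how the `ENV:` names above resolve: pydantic-settings with `env_nested_delimiter="_"` maps `WHISPER_MODEL` onto `config.whisper.model` and `LOG_LEVEL` onto `config.log_level`. The plain `str` fields below are a simplification of the real enum types.

```python
import os

from pydantic import BaseModel
from pydantic_settings import BaseSettings, SettingsConfigDict


class WhisperConfig(BaseModel):
    model: str = "distil-small.en"


class Config(BaseSettings):
    model_config = SettingsConfigDict(env_nested_delimiter="_")

    log_level: str = "info"
    whisper: WhisperConfig = WhisperConfig()


# Environment variables override the defaults when `Config()` is instantiated.
os.environ["WHISPER_MODEL"] = "distil-medium.en"
os.environ["LOG_LEVEL"] = "debug"
config = Config()
assert config.whisper.model == "distil-medium.en"
assert config.log_level == "debug"
```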
speaches/main.py
--- speaches/main.py
+++ speaches/main.py
@@ -90,6 +90,8 @@
                     audio_stream.duration - config.inactivity_window_seconds
                 )
                 vad_opts = VadOptions(min_silence_duration_ms=500, speech_pad_ms=0)
+                # NOTE: This is a synchronous operation that runs every time new data is received.
+                # This shouldn't be an issue unless data is being received in tiny chunks or the user's machine is a potato.
                 timestamps = get_speech_timestamps(audio.data, vad_opts)
                 if len(timestamps) == 0:
                     logger.info(
@@ -143,7 +145,6 @@
         tg.create_task(audio_receiver(ws, audio_stream))
         async for transcription in audio_transcriber(asr, audio_stream):
             logger.debug(f"Sending transcription: {transcription.text}")
-            # Or should it be
             if ws.client_state == WebSocketState.DISCONNECTED:
                 break
             await ws.send_text(format_transcription(transcription, response_format))
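To illustrate the note above, a sketch (not the project's actual code) of how the trailing VAD window can drive the inactivity check: run VAD over the last `inactivity_window_seconds` of audio and finalize once no speech has been detected for `max_inactivity_seconds`. The function and constant names here are illustrative; the VAD helpers are faster-whisper's.

```python
import numpy as np
from faster_whisper.vad import VadOptions, get_speech_timestamps

SAMPLE_RATE = 16000
INACTIVITY_WINDOW_SECONDS = 3.0
MAX_INACTIVITY_SECONDS = 2.0


def stream_went_silent(pcm_f32: np.ndarray) -> bool:
    """Return True when the trailing audio contains no recent speech.

    `pcm_f32` is mono, 16000 Hz, float32 audio.
    """
    window = pcm_f32[-int(INACTIVITY_WINDOW_SECONDS * SAMPLE_RATE):]
    vad_opts = VadOptions(min_silence_duration_ms=500, speech_pad_ms=0)
    # Each timestamp is a dict with "start"/"end" offsets in samples.
    timestamps = get_speech_timestamps(window, vad_opts)
    if len(timestamps) == 0:
        return True
    trailing_silence = (len(window) - timestamps[-1]["end"]) / SAMPLE_RATE
    return trailing_silence >= MAX_INACTIVITY_SECONDS
```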