S. Neuhaus 2024-10-02
typos
@a58e6cdc0dcd552082377f4311d52fb5150893f6
README.md
--- README.md
+++ README.md
@@ -1,12 +1,12 @@
 # Faster Whisper Server
-`faster-whisper-server` is an OpenAI API compatible transcription server which uses [faster-whisper](https://github.com/SYSTRAN/faster-whisper) as it's backend.
+`faster-whisper-server` is an OpenAI API-compatible transcription server which uses [faster-whisper](https://github.com/SYSTRAN/faster-whisper) as its backend.
 Features:
 - GPU and CPU support.
 - Easily deployable using Docker.
 - **Configurable through environment variables (see [config.py](./src/faster_whisper_server/config.py))**.
 - OpenAI API compatible.
-- Streaming support (transcription is sent via SSE as the audio is transcribed. You don't need to wait for the audio to fully be transcribed before receiving it)
-- Live transcription support (audio is sent via websocket as it's generated)
+- Streaming support (transcription is sent via [SSE](https://en.wikipedia.org/wiki/Server-sent_events) as the audio is transcribed. You don't need to wait for the audio to fully be transcribed before receiving it).
+- Live transcription support (audio is sent via websocket as it's generated).
 - Dynamic model loading / offloading. Just specify which model you want to use in the request and it will be loaded automatically. It will then be unloaded after a period of inactivity.
 
 Please create an issue if you find a bug, have a question, or a feature suggestion.
@@ -67,7 +67,7 @@
 print(transcript.text)
 ```
 
-### CURL
+### cURL
 ```bash
 # If `model` isn't specified, the default model is used
 curl http://localhost:8000/v1/audio/transcriptions -F "file=@audio.wav"