

typos
@a58e6cdc0dcd552082377f4311d52fb5150893f6
--- README.md
+++ README.md
@@ -1,12 +1,12 @@
 # Faster Whisper Server
-`faster-whisper-server` is an OpenAI API compatible transcription server which uses [faster-whisper](https://github.com/SYSTRAN/faster-whisper) as it's backend.
+`faster-whisper-server` is an OpenAI API-compatible transcription server which uses [faster-whisper](https://github.com/SYSTRAN/faster-whisper) as its backend.
 Features:
 - GPU and CPU support.
 - Easily deployable using Docker.
 - **Configurable through environment variables (see [config.py](./src/faster_whisper_server/config.py))**.
 - OpenAI API compatible.
-- Streaming support (transcription is sent via SSE as the audio is transcribed. You don't need to wait for the audio to fully be transcribed before receiving it)
-- Live transcription support (audio is sent via websocket as it's generated)
+- Streaming support (transcription is sent via [SSE](https://en.wikipedia.org/wiki/Server-sent_events) as the audio is transcribed. You don't need to wait for the audio to fully be transcribed before receiving it).
+- Live transcription support (audio is sent via websocket as it's generated).
 - Dynamic model loading / offloading. Just specify which model you want to use in the request and it will be loaded automatically. It will then be unloaded after a period of inactivity.
 
 Please create an issue if you find a bug, have a question, or a feature suggestion.
@@ -67,7 +67,7 @@
 print(transcript.text)
 ```
 
-### CURL
+### cURL
 ```bash
 # If `model` isn't specified, the default model is used
 curl http://localhost:8000/v1/audio/transcriptions -F "file=@audio.wav"
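The `curl` example in the second hunk uses the default model. For the dynamic model loading described in the features list, a minimal sketch of selecting a model per request with the OpenAI Python client; the model name and `api_key` value here are assumptions, not confirmed defaults:

```python
from openai import OpenAI

# The server ignores the key's value, but the client requires one (assumed).
client = OpenAI(api_key="cant-be-empty", base_url="http://localhost:8000/v1/")

with open("audio.wav", "rb") as f:
    transcript = client.audio.transcriptions.create(
        # Hypothetical model ID; the named model is loaded on first use
        # and unloaded after a period of inactivity.
        model="Systran/faster-whisper-small",
        file=f,
    )
print(transcript.text)
```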
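For the streaming feature, a minimal sketch of consuming the SSE response with `httpx`, assuming the endpoint accepts a `stream=true` form field (verify the exact field name against config.py):

```python
import httpx

with httpx.stream(
    "POST",
    "http://localhost:8000/v1/audio/transcriptions",
    files={"file": open("audio.wav", "rb")},
    data={"stream": "true"},  # assumed field name; check config.py
    timeout=None,
) as response:
    for line in response.iter_lines():
        # SSE frames arrive as lines prefixed with "data: ".
        if line.startswith("data: "):
            print(line.removeprefix("data: "))
```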
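For live transcription, a rough sketch of pushing audio over a websocket as it is generated; the endpoint path, audio format, and message framing are all assumptions (hypothetical) and should be checked against the server's documentation:

```python
import asyncio
import websockets  # pip install websockets

async def live_transcribe() -> None:
    # Hypothetical endpoint path; check the server's docs for the real one.
    async with websockets.connect("ws://localhost:8000/v1/audio/transcriptions") as ws:
        with open("audio.pcm", "rb") as f:  # raw audio; required format is an assumption
            while chunk := f.read(4000):
                await ws.send(chunk)        # push audio as it is "generated"
                await asyncio.sleep(0.1)    # pace it roughly like a live source
        async for message in ws:            # transcripts pushed back by the server
            print(message)

asyncio.run(live_transcribe())
```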