yjyoon / whisper_server_speaches
whisper_server_speaches / compose.yaml
File name                 Commit message                                      Commit date
.github/workflows         Revert "wip"                                        2024-09-08
examples                  Update script.sh                                    2024-09-03
scripts                   chore: minor changes to scripts/client.py           2024-09-05
src/speaches              chore: set root logger level to info                2024-09-11
tests                     deps: add `pytest-async`                            2024-09-08
.dockerignore             chore: ignore .env                                  2024-05-27
.envrc                    init                                                2024-05-20
.gitattributes            docs: add live-transcription demo                   2024-05-28
.gitignore                chore: update .gitignore                            2024-07-03
.pre-commit-config.yaml   switch to basedpyright                              2024-07-20
Dockerfile.cpu            chore: update default whisper model                 2024-09-08
Dockerfile.cuda           chore: update default whisper model                 2024-09-08
LICENSE                   init                                                2024-05-20
README.md                 add tutorial for kubernetes                         2024-09-04
Taskfile.yaml             feat: use `uv` package manager, pin dependencies    2024-09-08
audio.wav                 docs: update README.md                              2024-05-27
compose.yaml              chore: format compose                               2024-09-11
flake.lock                deps: update flake                                  2024-09-08
flake.nix                 deps: update flake                                  2024-09-08
pyproject.toml            fix: gradio pydantic error                          2024-09-11
uv.lock                   fix: gradio pydantic error                          2024-09-11
Latest commit 5862514 by Fedir Zadniprovskyi, 2024-09-11: chore: format compose (UNIX line endings)
# TODO: https://docs.astral.sh/uv/guides/integration/docker/#configuring-watch-with-docker-compose
services:
  faster-whisper-server-cuda:
    image: fedirz/faster-whisper-server:latest-cuda
    build:
      dockerfile: Dockerfile.cuda
      context: .
      platforms:
        - linux/amd64
        - linux/arm64
    restart: unless-stopped
    ports:
      - 8000:8000
    volumes:
      - hugging_face_cache:/root/.cache/huggingface
    develop:
      watch:
        - path: faster_whisper_server
          action: rebuild
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: ["gpu"]
            # If you have CDI feature enabled use the following instead
            # https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html
            # https://docs.docker.com/reference/cli/dockerd/#enable-cdi-devices
            # - driver: cdi
            #   device_ids:
            #     - nvidia.com/gpu=all
  faster-whisper-server-cpu:
    image: fedirz/faster-whisper-server:latest-cpu
    build:
      dockerfile: Dockerfile.cpu
      context: .
      platforms:
        - linux/amd64
        - linux/arm64
    restart: unless-stopped
    ports:
      - 8000:8000
    volumes:
      - hugging_face_cache:/root/.cache/huggingface
    develop:
      watch:
        - path: faster_whisper_server
          action: rebuild
volumes:
  hugging_face_cache:
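The commented-out CDI alternative in the file could be enabled without editing compose.yaml itself by placing it in a Compose override file. A minimal sketch, assuming the Docker daemon has CDI device support enabled; the filename `compose.override.yaml` is one Docker Compose picks up automatically:

```yaml
# compose.override.yaml (hypothetical) -- applies the CDI device request
# described in the comments of compose.yaml to the CUDA service.
services:
  faster-whisper-server-cuda:
    deploy:
      resources:
        reservations:
          devices:
            - driver: cdi
              device_ids:
                - nvidia.com/gpu=all
```

Note that Compose merges sequences from base and override files rather than replacing them, so the base `capabilities: ["gpu"]` entry may still apply alongside this one; fully swapping the device request may require editing the base file.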