yjyoon / whisper_server_speaches
src/speaches/asr.py
Repository root:

File name | Commit message | Commit date
.github/workflows | feat: switch to ghcr.io | 01-10
configuration | feat: add instrumentation | 2024-12-17
docs | rename to `speaches` | 01-12
examples | rename to `speaches` | 01-12
scripts | chore: misc changes | 2024-10-03
src/speaches | rename to `speaches` | 01-12
tests | rename to `speaches` | 01-12
.dockerignore | fix: .dockerignore | 01-12
.envrc | init | 2024-05-20
.gitattributes | chore(deps): update pre-commit hook astral-sh/ruff-pre-commit to v0.7.2 | 2024-11-02
.gitignore | chore: update .gitignore | 2024-07-03
.pre-commit-config.yaml | chore(deps): update pre-commit hook python-jsonschema/check-jsonschema to v0.31.0 | 01-12
Dockerfile | chore(deps): update ghcr.io/astral-sh/uv docker tag to v0.5.18 | 01-12
LICENSE | init | 2024-05-20
README.md | rename to `speaches` | 01-12
Taskfile.yaml | rename to `speaches` | 01-12
audio.wav | chore: update volume names and mount points | 01-10
compose.cpu.yaml | rename to `speaches` | 01-12
compose.cuda-cdi.yaml | rename to `speaches` | 01-12
compose.cuda.yaml | rename to `speaches` | 01-12
compose.observability.yaml | chore(deps): update otel/opentelemetry-collector-contrib docker tag to v0.117.0 | 01-12
compose.yaml | rename to `speaches` | 01-12
flake.lock | deps: update flake | 2024-11-01
flake.nix | chore(deps): add loki and tempo package to flake | 2024-12-17
mkdocs.yml | rename to `speaches` | 01-12
pyproject.toml | rename to `speaches` | 01-12
renovate.json | feat: renovate handle pre-commit | 2024-11-01
uv.lock | rename to `speaches` | 01-12
src/speaches:

File name | Commit message | Commit date
routers | rename to `speaches` | 01-12
__init__.py | rename to `speaches` | 01-12
api_models.py | rename to `speaches` | 01-12
asr.py | rename to `speaches` | 01-12
audio.py | rename to `speaches` | 01-12
config.py | rename to `speaches` | 01-12
dependencies.py | rename to `speaches` | 01-12
gradio_app.py | rename to `speaches` | 01-12
hf_utils.py | rename to `speaches` | 01-12
logger.py | rename to `speaches` | 01-12
main.py | rename to `speaches` | 01-12
model_manager.py | rename to `speaches` | 01-12
text_utils.py | rename to `speaches` | 01-12
text_utils_test.py | rename to `speaches` | 01-12
transcriber.py | rename to `speaches` | 01-12
Latest commit: f3802b7 (Fedir Zadniprovskyi, 01-12): rename to `speaches`
from __future__ import annotations

import asyncio
import logging
import time
from typing import TYPE_CHECKING

from speaches.api_models import TranscriptionSegment, TranscriptionWord
from speaches.text_utils import Transcription

if TYPE_CHECKING:
    from faster_whisper import transcribe

    from speaches.audio import Audio

logger = logging.getLogger(__name__)


class FasterWhisperASR:
    def __init__(
        self,
        whisper: transcribe.WhisperModel,
        **kwargs,
    ) -> None:
        self.whisper = whisper
        self.transcribe_opts = kwargs

    def _transcribe(
        self,
        audio: Audio,
        prompt: str | None = None,
    ) -> tuple[Transcription, transcribe.TranscriptionInfo]:
        start = time.perf_counter()
        # NOTE: should `BatchedInferencePipeline` be used here?
        segments, transcription_info = self.whisper.transcribe(
            audio.data,
            initial_prompt=prompt,
            word_timestamps=True,
            **self.transcribe_opts,
        )
        segments = TranscriptionSegment.from_faster_whisper_segments(segments)
        words = TranscriptionWord.from_segments(segments)
        for word in words:
            word.offset(audio.start)
        transcription = Transcription(words)
        end = time.perf_counter()
        logger.info(
            f"Transcribed {audio} in {end - start:.2f} seconds. Prompt: {prompt}. Transcription: {transcription.text}"
        )
        return (transcription, transcription_info)

    async def transcribe(
        self,
        audio: Audio,
        prompt: str | None = None,
    ) -> tuple[Transcription, transcribe.TranscriptionInfo]:
        """Wrapper around _transcribe so it can be used in async context."""
        # is this the optimal way to execute a blocking call in an async context?
        # TODO: verify performance when running inference on a CPU
        return await asyncio.get_running_loop().run_in_executor(
            None,
            self._transcribe,
            audio,
            prompt,
        )
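A minimal usage sketch, not part of the repository itself: the model id, device and compute settings, and the `language` kwarg below are illustrative assumptions, and construction of a `speaches.audio.Audio` instance is deliberately elided because audio.py is not shown on this page.

import asyncio

from faster_whisper import WhisperModel

from speaches.asr import FasterWhisperASR


async def main(audio) -> None:
    # "audio" must be a speaches.audio.Audio instance; its construction is
    # elided here. Model id and settings are examples, not prescribed by asr.py.
    whisper = WhisperModel("Systran/faster-whisper-small", device="cpu", compute_type="int8")
    # Extra kwargs passed to FasterWhisperASR are stored as transcribe_opts
    # and forwarded to WhisperModel.transcribe on every call.
    asr = FasterWhisperASR(whisper, language="en")
    transcription, info = await asr.transcribe(audio)
    print(info.language, transcription.text)

# asyncio.run(main(audio)) once `audio` has been constructed.

On the question the code itself raises about offloading the blocking call: on Python 3.9+, asyncio.to_thread(self._transcribe, audio, prompt) is an equivalent convenience wrapper that submits to the same default ThreadPoolExecutor as run_in_executor(None, ...), so the two behave the same here; neither lets CPU-bound inference run in parallel with the event loop beyond the extent to which the underlying native code releases the GIL.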