yjyoon / whisper_server_speaches
whisper_server_speaches / src / speaches / transcriber.py
| File name | Commit message | Commit date |
| --- | --- | --- |
| .github/workflows | feat: switch to ghcr.io | 01-10 |
| configuration | feat: add instrumentation | 2024-12-17 |
| docs | rename to `speaches` | 01-12 |
| examples | rename to `speaches` | 01-12 |
| scripts | chore: misc changes | 2024-10-03 |
| src/speaches | rename to `speaches` | 01-12 |
| tests | rename to `speaches` | 01-12 |
| .dockerignore | chore: update .dockerignore | 2024-11-01 |
| .envrc | init | 2024-05-20 |
| .gitattributes | chore(deps): update pre-commit hook astral-sh/ruff-pre-commit to v0.7.2 | 2024-11-02 |
| .gitignore | chore: update .gitignore | 2024-07-03 |
| .pre-commit-config.yaml | docs: usage pages (and more) | 01-12 |
| Dockerfile | rename to `speaches` | 01-12 |
| LICENSE | init | 2024-05-20 |
| README.md | rename to `speaches` | 01-12 |
| Taskfile.yaml | rename to `speaches` | 01-12 |
| audio.wav | chore: update volume names and mount points | 01-10 |
| compose.cpu.yaml | rename to `speaches` | 01-12 |
| compose.cuda-cdi.yaml | rename to `speaches` | 01-12 |
| compose.cuda.yaml | rename to `speaches` | 01-12 |
| compose.observability.yaml | rename to `speaches` | 01-12 |
| compose.yaml | rename to `speaches` | 01-12 |
| flake.lock | deps: update flake | 2024-11-01 |
| flake.nix | chore(deps): add loki and tempo package to flake | 2024-12-17 |
| mkdocs.yml | rename to `speaches` | 01-12 |
| pyproject.toml | rename to `speaches` | 01-12 |
| renovate.json | feat: renovate handle pre-commit | 2024-11-01 |
| uv.lock | rename to `speaches` | 01-12 |
Contents of src/speaches:

| File name | Commit message | Commit date |
| --- | --- | --- |
| routers | rename to `speaches` | 01-12 |
| __init__.py | rename to `speaches` | 01-12 |
| api_models.py | rename to `speaches` | 01-12 |
| asr.py | rename to `speaches` | 01-12 |
| audio.py | rename to `speaches` | 01-12 |
| config.py | rename to `speaches` | 01-12 |
| dependencies.py | rename to `speaches` | 01-12 |
| gradio_app.py | rename to `speaches` | 01-12 |
| hf_utils.py | rename to `speaches` | 01-12 |
| logger.py | rename to `speaches` | 01-12 |
| main.py | rename to `speaches` | 01-12 |
| model_manager.py | rename to `speaches` | 01-12 |
| text_utils.py | rename to `speaches` | 01-12 |
| text_utils_test.py | rename to `speaches` | 01-12 |
| transcriber.py | rename to `speaches` | 01-12 |
transcriber.py — latest commit 43cc67a by Fedir Zadniprovskyi, 01-12: rename to `speaches` (UNIX line endings)
```python
from __future__ import annotations

import logging
from typing import TYPE_CHECKING

from speaches.audio import Audio, AudioStream
from speaches.text_utils import Transcription, common_prefix, to_full_sentences, word_to_text

if TYPE_CHECKING:
    from collections.abc import AsyncGenerator

    from speaches.api_models import TranscriptionWord
    from speaches.asr import FasterWhisperASR

logger = logging.getLogger(__name__)


class LocalAgreement:
    def __init__(self) -> None:
        self.unconfirmed = Transcription()

    def merge(self, confirmed: Transcription, incoming: Transcription) -> list[TranscriptionWord]:
        # https://github.com/ufal/whisper_streaming/blob/main/whisper_online.py#L264
        incoming = incoming.after(confirmed.end - 0.1)
        prefix = common_prefix(incoming.words, self.unconfirmed.words)
        logger.debug(f"Confirmed: {confirmed.text}")
        logger.debug(f"Unconfirmed: {self.unconfirmed.text}")
        logger.debug(f"Incoming: {incoming.text}")

        if len(incoming.words) > len(prefix):
            self.unconfirmed = Transcription(incoming.words[len(prefix):])
        else:
            self.unconfirmed = Transcription()

        return prefix


# TODO: needs a better name
def needs_audio_after(confirmed: Transcription) -> float:
    full_sentences = to_full_sentences(confirmed.words)
    return full_sentences[-1][-1].end if len(full_sentences) > 0 else 0.0


def prompt(confirmed: Transcription) -> str | None:
    sentences = to_full_sentences(confirmed.words)
    return word_to_text(sentences[-1]) if len(sentences) > 0 else None


async def audio_transcriber(
    asr: FasterWhisperASR,
    audio_stream: AudioStream,
    min_duration: float,
) -> AsyncGenerator[Transcription, None]:
    local_agreement = LocalAgreement()
    full_audio = Audio()
    confirmed = Transcription()
    async for chunk in audio_stream.chunks(min_duration):
        full_audio.extend(chunk)
        audio = full_audio.after(needs_audio_after(confirmed))
        transcription, _ = await asr.transcribe(audio, prompt(confirmed))
        new_words = local_agreement.merge(confirmed, transcription)
        if len(new_words) > 0:
            confirmed.extend(new_words)
            yield confirmed
    logger.debug("Flushing...")
    confirmed.extend(local_agreement.unconfirmed.words)
    yield confirmed
    logger.info("Audio transcriber finished")
```
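The `LocalAgreement` policy above confirms only the words that two consecutive transcription passes agree on: the longest common prefix of the previous unconfirmed hypothesis and the incoming one becomes confirmed, and everything after it waits for the next pass. Below is a minimal self-contained sketch of that idea using plain word lists instead of the project's timestamped `Transcription` objects; `ToyLocalAgreement` is illustrative only, and the caller is assumed to pass just the words after the already-confirmed point (the real `merge` handles that trimming via `incoming.after(...)`):

```python
def common_prefix(a: list[str], b: list[str]) -> list[str]:
    """Longest shared prefix of two word lists."""
    prefix = []
    for wa, wb in zip(a, b):
        if wa == wb:
            prefix.append(wa)
        else:
            break
    return prefix


class ToyLocalAgreement:
    """Confirm words only once two consecutive hypotheses agree on them."""

    def __init__(self) -> None:
        self.unconfirmed: list[str] = []

    def merge(self, incoming: list[str]) -> list[str]:
        # Words present in both the previous hypothesis and the current one
        # are considered stable and get confirmed.
        confirmed = common_prefix(incoming, self.unconfirmed)
        # Whatever follows the agreed prefix stays unconfirmed until a later
        # pass reproduces it.
        self.unconfirmed = incoming[len(confirmed):]
        return confirmed


agreement = ToyLocalAgreement()
print(agreement.merge(["hello", "word"]))          # → [] (first pass: nothing to agree with yet)
print(agreement.merge(["hello", "world", "how"]))  # → ['hello'] ("word" vs "world" disagree)
```

This is why streaming output lags the audio by roughly one chunk: a word is never emitted on the hypothesis that first contains it, only once a second pass reproduces it.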
Copyright Yona authors & © NAVER Corp. & NAVER LABS Supported by NAVER CLOUD PLATFORM