yjyoon / whisper_server_speaches
whisper_server_speaches / speaches / transcriber.py
| File name | Commit message | Commit date |
|---|---|---|
| .github/workflows | feat: add gha workflow for building and pushing docker images | 2024-05-27 |
| pre-commit-scripts | feat: add more pre-commit hooks | 2024-05-27 |
| speaches | feat: add more pre-commit hooks | 2024-05-27 |
| tests | feat: add more pre-commit hooks | 2024-05-27 |
| .dockerignore | feat: add gha workflow for building and pushing docker images | 2024-05-27 |
| .envrc | init | 2024-05-20 |
| .gitignore | feat: add gha workflow for building and pushing docker images | 2024-05-27 |
| .pre-commit-config.yaml | feat: add more pre-commit hooks | 2024-05-27 |
| Dockerfile.cpu | feat: add more pre-commit hooks | 2024-05-27 |
| Dockerfile.cuda | feat: add more pre-commit hooks | 2024-05-27 |
| LICENSE | init | 2024-05-20 |
| README.md | docs: add examples, roadmap, etc. | 2024-05-21 |
| Taskfile.yaml | fix: circular import | 2024-05-26 |
| compose.yaml | fix: docker multi-arch builds | 2024-05-23 |
| flake.lock | init | 2024-05-20 |
| flake.nix | feat: add gha workflow for building and pushing docker images | 2024-05-27 |
| poetry.lock | deps: add youtube-dl as dev dependency | 2024-05-25 |
| pyproject.toml | feat: add more pre-commit hooks | 2024-05-27 |
| File name | Commit message | Commit date |
|---|---|---|
| __init__.py | init | 2024-05-20 |
| asr.py | feat: further improve openai compatabilit + refactor | 2024-05-25 |
| audio.py | style: add ruff | 2024-05-21 |
| config.py | feat: support loading multiple models | 2024-05-27 |
| core.py | style: add ruff | 2024-05-21 |
| logger.py | init | 2024-05-20 |
| main.py | feat: add more pre-commit hooks | 2024-05-27 |
| server_models.py | feat: add more pre-commit hooks | 2024-05-27 |
| transcriber.py | init | 2024-05-20 |
| utils.py | feat: further improve openai compatabilit + refactor | 2024-05-25 |
Last commit: d0feed8 ("init") by Fedir Zadniprovskyi, 2024-05-20
```python
from __future__ import annotations

from typing import AsyncGenerator

from speaches.asr import FasterWhisperASR
from speaches.audio import Audio, AudioStream
from speaches.config import config
from speaches.core import Transcription, Word, common_prefix, to_full_sentences
from speaches.logger import logger


class LocalAgreement:
    def __init__(self) -> None:
        self.unconfirmed = Transcription()

    def merge(self, confirmed: Transcription, incoming: Transcription) -> list[Word]:
        # https://github.com/ufal/whisper_streaming/blob/main/whisper_online.py#L264
        incoming = incoming.after(confirmed.end - 0.1)
        prefix = common_prefix(incoming.words, self.unconfirmed.words)
        logger.debug(f"Confirmed: {confirmed.text}")
        logger.debug(f"Unconfirmed: {self.unconfirmed.text}")
        logger.debug(f"Incoming: {incoming.text}")

        if len(incoming.words) > len(prefix):
            self.unconfirmed = Transcription(incoming.words[len(prefix) :])
        else:
            self.unconfirmed = Transcription()

        return prefix

    @classmethod
    def prompt(cls, confirmed: Transcription) -> str | None:
        sentences = to_full_sentences(confirmed.words)
        if len(sentences) == 0:
            return None
        return sentences[-1].text

    # TODO: better name
    @classmethod
    def needs_audio_after(cls, confirmed: Transcription) -> float:
        full_sentences = to_full_sentences(confirmed.words)
        return full_sentences[-1].end if len(full_sentences) > 0 else 0.0


def needs_audio_after(confirmed: Transcription) -> float:
    full_sentences = to_full_sentences(confirmed.words)
    return full_sentences[-1].end if len(full_sentences) > 0 else 0.0


def prompt(confirmed: Transcription) -> str | None:
    sentences = to_full_sentences(confirmed.words)
    if len(sentences) == 0:
        return None
    return sentences[-1].text


async def audio_transcriber(
    asr: FasterWhisperASR,
    audio_stream: AudioStream,
) -> AsyncGenerator[Transcription, None]:
    local_agreement = LocalAgreement()
    full_audio = Audio()
    confirmed = Transcription()
    async for chunk in audio_stream.chunks(config.min_duration):
        full_audio.extend(chunk)
        audio = full_audio.after(needs_audio_after(confirmed))
        transcription, _ = await asr.transcribe(audio, prompt(confirmed))
        new_words = local_agreement.merge(confirmed, transcription)
        if len(new_words) > 0:
            confirmed.extend(new_words)
            yield confirmed
    logger.debug("Flushing...")
    confirmed.extend(local_agreement.unconfirmed.words)
    yield confirmed
    logger.info("Audio transcriber finished")
```
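The core idea in `LocalAgreement.merge` — a word is confirmed only once two consecutive ASR hypotheses agree on it (the policy borrowed from ufal/whisper_streaming) — can be sketched without the speaches internals. The `Word` dataclass, `common_prefix`, and the `merge` signature below are simplified stand-ins for illustration, not the project's actual classes: the real code operates on `Transcription` objects and trims overlap with a 0.1 s tolerance.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Word:
    """Simplified stand-in for speaches.core.Word: word text plus end timestamp."""

    text: str
    end: float


def common_prefix(a: list[Word], b: list[Word]) -> list[Word]:
    """Longest shared prefix of two hypotheses, compared by word text."""
    prefix: list[Word] = []
    for wa, wb in zip(a, b):
        if wa.text != wb.text:
            break
        prefix.append(wa)
    return prefix


class LocalAgreement:
    """Confirm a word only after two consecutive hypotheses agree on it."""

    def __init__(self) -> None:
        self.unconfirmed: list[Word] = []

    def merge(self, confirmed_end: float, incoming: list[Word]) -> list[Word]:
        # Discard words covering audio that is already confirmed.
        incoming = [w for w in incoming if w.end > confirmed_end]
        prefix = common_prefix(incoming, self.unconfirmed)
        # Anything past the agreed prefix waits for the next hypothesis.
        self.unconfirmed = incoming[len(prefix):]
        return prefix


agreement = LocalAgreement()
confirmed: list[Word] = []

# Successive ASR hypotheses as the audio buffer grows.
for hypothesis in [
    [Word("hello", 0.5)],
    [Word("hello", 0.5), Word("world", 1.0)],
    [Word("hello", 0.5), Word("world", 1.0), Word("again", 1.5)],
]:
    end = confirmed[-1].end if confirmed else 0.0
    confirmed.extend(agreement.merge(end, hypothesis))

# "again" has appeared in only one hypothesis, so it is still unconfirmed;
# flush it at end-of-stream, mirroring the "Flushing..." step above.
assert [w.text for w in confirmed] == ["hello", "world"]
confirmed.extend(agreement.unconfirmed)
print([w.text for w in confirmed])  # ['hello', 'world', 'again']
```

Each hypothesis confirms only the prefix it shares with the previous one, so transient recognition errors at the unstable tail of the audio never reach the caller; the price is one extra hypothesis of latency per word.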
Copyright Yona authors & © NAVER Corp. & NAVER LABS Supported by NAVER CLOUD PLATFORM