This is really cool. FWIW, existing open-source TTS engines are really bad in comparison to what you have here: I know this is voice-to-voice, but I think there'd be a lot of appetite to get this to also be multimodal and accept text (essentially making it a really good TTS model, in addition to a great voice-to-voice model).
I suppose someone could hack their way around the problem by finetuning it to essentially replay Piper (or whatever) output, only with more natural prosody and intonation. And then have the text LLM pipe to Piper, and Piper pipe to Hertz-dev. But it would be pretty useful to have it accept text natively!
They say Hertz is first of its kind but Moshi is another duplex audio model from earlier this year that seems to perform similarly (and it runs on a MacBook):
https://github.com/kyutai-labs/moshi
Moshi never released the base model, only two conversationally finetuned models. They also never released training code except for the codec. Though I don't see any training code for Hertz either, just 3 inference notebooks, and model code full of no_grad. No paper either to help me understand how this was trained and what the architecture is like. So I'm not too sure about researcher-friendliness unless I'm missing something.
- LLaMA-Omni https://github.com/ictnlp/LLaMA-Omni a speech-language model built on Llama-3.1-8B-Instruct for simultaneous generation of text and speech
- Ichigo https://github.com/homebrewltd/ichigo open research project extending a text-based LLM to have native listening ability, using an early fusion technique
Moshi is a good model to build chat applications on, this is designed to be more of a proper base model with all the quirkiness, naturalness, and researcher-friendliness of base modeling.
Tesla’s approach to pure vision-based autonomous driving—temporarily setting aside lidar and other sensors—seems designed to make this technology more accessible and scalable. By focusing on a vision-only model, they can accelerate adoption and gather large datasets for quicker iterations. Once the vision-based system reaches a mature stage, I imagine Tesla might reintegrate additional sensor data, like lidar or radar, to refine their autonomous driving suite, making it even more robust and closer to perfection.
Additionally, I’ve been exploring an idea about voice interaction systems. Currently, most voice interactions are processed by converting voice input into text, generating a text-based response, and then turning this text back into audio. But what if we could train the system to respond directly in voice, without involving text at all? If developed to maturity, this model could produce responses that feel more natural and spontaneous, possibly diverging from traditional text-to-speech outputs. Natural speech has unique syntax and rhythm, not to mention dialect and tone variations, which could make a purely voice-trained system fascinating and more human-like.
Could you let me know if your current voice interaction model follows the standard speech-to-text-to-speech process, or if there is exploration in voice-to-voice processing?
That's really cool.
I'm currently exploring VUI (Voice User Interface) and this might come in handy.
I might be a bit biased (did my PhD exploring how VUI can persuade humans), but I think VUI is "the future" of computer interaction.
If it's not the future, then at least it adds a new group of people (kids + elderly people) as potential users.
If the authors or anyone else that works on a voice model are in here, do you ever get creeped out or feel the sounds you’re getting from the system have a physiological effect on you?
> Base models are uniquely valuable as a research product because they accurately model the distribution of the data that they were trained on, as opposed to models that have had substantial RL tuning done to collapse their generation distributions. This makes base models the best starting point to fine-tune for a large number of different tasks.
Is this idea (‘collapse of their generation distributions’) a researched topic? If so, under what name?
Sounds interesting and maybe related to the whole continual learning / how to finetune properly line of work
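In the RLHF literature this effect is often discussed as "mode collapse" or loss of output diversity, and the "collapse" is measurable as an entropy drop in the model's generation distribution. A toy illustration (the distributions here are made up purely to show the effect):

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

base  = [0.25, 0.25, 0.25, 0.25]       # broad, base-model-like distribution
tuned = [0.97, 0.01, 0.01, 0.01]       # sharpened, post-RL-like distribution
print(entropy(base) > entropy(tuned))  # True: tuning collapsed the entropy
```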
The voice sounds a little distorted, and there is often noise in the background (especially noticeable when the noise disappears as the voice pauses). I wonder: is this a limitation of the model, or a problem with the quality of the training data?
Can one of the authors explain what this actually means from the post?
hertz-vae: a 1.8 billion parameter transformer decoder which acts as a learned prior for the audio VAE. The model uses a context of 8192 sampled latent representations (17 minutes) and predicts the next encoded audio frame as a mixture of gaussians. 15 bits of quantized information from the next token act as semantic scaffolding to steer the generation in a streamable manner.
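The repo doesn't spell out the parameterization, but a "mixture of gaussians" next-frame head is typically sampled as in this sketch (all sizes hypothetical; `K` components over a `D`-dim latent):

```python
import numpy as np

def sample_mog_frame(logits, means, log_stds, rng):
    """Sample one latent audio frame from a mixture-of-Gaussians head.

    logits:   (K,)   unnormalized mixture weights
    means:    (K, D) per-component means over the D-dim latent
    log_stds: (K, D) per-component log standard deviations
    """
    w = np.exp(logits - logits.max())     # numerically stable softmax
    w /= w.sum()
    k = rng.choice(len(w), p=w)           # pick a mixture component
    eps = rng.standard_normal(means.shape[1])
    return means[k] + np.exp(log_stds[k]) * eps

rng = np.random.default_rng(0)
K, D = 4, 32                              # hypothetical head sizes
frame = sample_mog_frame(rng.standard_normal(K),
                         rng.standard_normal((K, D)),
                         np.full((K, D), -1.0), rng)
print(frame.shape)  # (32,)
```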
1. `codec`: First, compress 16 kHz audio into 8 samples per second with convolutions. Then, vector quantize to 128 bits (probably 8 floats) to get a codec. This is not nearly enough bits to actually represent the audio; it's more to represent phonemes.
2. `vae` -> This looks like a VAE-based diffusion model, that uses the codec as its prompt.
3. `dev` -> This is a next-codec prediction model.
Put together, it probably runs like so:
1. Turn your prompt into tokens with the `codec`.
2. If you want `s` more seconds of audio, use `dev` to predict `8 * s` more tokens.
3. Turn it back into audio with the `vae` diffusion model.
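The three guessed steps above can be sketched as a generation loop (`predict_next` and `decode` are toy stand-ins for the real `dev` and `vae` models; the 8 tokens/s rate is from the repo's numbers):

```python
TOKENS_PER_SECOND = 8  # per the repo: 16 kHz audio compressed to 8 frames/s

def continue_audio(prompt_tokens, extra_seconds, predict_next, decode):
    """Autoregressively extend a codec-token sequence, then decode it."""
    tokens = list(prompt_tokens)                # 1. prompt already tokenized
    for _ in range(TOKENS_PER_SECOND * extra_seconds):
        tokens.append(predict_next(tokens))     # 2. `dev`: next-token step
    return decode(tokens)                       # 3. `vae`: tokens -> audio

# Toy stand-ins so the loop runs end to end:
audio = continue_audio([1, 2, 3], extra_seconds=2,
                       predict_next=lambda ts: ts[-1] + 1,
                       decode=lambda ts: ts)
print(len(audio))  # 3 prompt tokens + 16 generated = 19
```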
Cool, looks like this is trained on 16 million hours of audio (500B tokens at ~0.11 seconds per token).
Even the large open source TTS models (see F5 TTS, Mask GCT) are mostly trained on very small audio datasets (say 100k hours) relative to the amount of audio available on the internet, so it's cool to see an open source effort to scale up training significantly.
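Quick check of the ~0.11 s/token figure from the quoted dataset size:

```python
hours = 16_000_000   # quoted training-set size
tokens = 500e9       # quoted token count
seconds_per_token = hours * 3600 / tokens
print(round(seconds_per_token, 3))  # 0.115
```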
The codec parameters remind me of the ~300 bps NRV military speech codec from 2010. It also uses 120 ms (8 Hz) frames, VBR-encoded from 16 kHz audio (closed source though).
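Assuming 128 bits per 120 ms frame as described elsewhere in the thread, hertz-codec's bitrate works out to roughly 1 kbps, a few times the NRV rate:

```python
frames_per_second = 8   # 120 ms frames
bits_per_frame = 128    # hertz-codec's quantized latent, per the repo
bps = frames_per_second * bits_per_frame
print(bps)  # 1024: roughly 3-4x the ~300 bps NRV rate
```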
Some commits are by `nicholascc` (https://github.com/nicholascc); via Twitter, he seems to be Nicholas Charette. Nicholas is a first year student at Stanford. For such a young group, this is a really impressive effort!
Pay attention to the given prompt length in the examples. The first 2 seconds of the first example are a real human speaking. Everything after is generated by the model. It produces what almost sounds like real human speech, mimicking the voice of the input, but it's currently at the level of something like GPT-2 in terms of meaningful words.
The voice samples are speaking gibberish a lot of the time, but sonically the voices are fantastic. They sound human, even if it's nonsense syllables.
With SD and LLMs, there's a lot you can do to debug it by studying the way it responds to small changes in the prompt. But, since Hertz-dev is using sound as its input, it would be hard to discern which token you should tweak. Of course, if it's meant to be used in real time, that kind of fiddling isn't an option at all. How would you go about systematically studying Hertz-dev's behavior?
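One systematic option is perturbation analysis on the raw waveform rather than on individual tokens: add small noise to the input audio and measure how far the output moves. A sketch of the probe (the identity lambda is a toy stand-in for Hertz-dev, purely to show the shape of the experiment):

```python
import numpy as np

def output_sensitivity(model, audio, noise_std, trials, rng):
    """Crude probe: how much does the model's output move when the
    input audio is perturbed by small Gaussian noise?"""
    base = model(audio)
    deltas = []
    for _ in range(trials):
        noisy = audio + rng.normal(0.0, noise_std, size=audio.shape)
        deltas.append(np.linalg.norm(model(noisy) - base))
    return float(np.mean(deltas))

rng = np.random.default_rng(0)
score = output_sensitivity(lambda a: a, np.zeros(100), 0.01, 8, rng)
print(score > 0)  # True: even tiny perturbations register
```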
Gotta say I was confused for a second but yeah apparently si.inc and ssi.inc are the domains for two different AGI companies and I can only assume it’s intentional?
reissbaker|1 year ago
netdevnet|1 year ago
PenisBanana|1 year ago
It may not be _them_ doing it, though.
blixt|1 year ago
a2128|1 year ago
underlines|1 year ago
- moshi https://github.com/kyutai-labs/moshi speech-text foundation model using Mimi, a SOTA streaming neural audio codec
- Mini-Omni https://github.com/gpt-omni/mini-omni multimodal LLM based on Qwen2 offering speech input and output
nicholas-cc|1 year ago
wwwlouishinofun|1 year ago
nicholas-cc|1 year ago
vanviegen|1 year ago
BrandiATMuhkuh|1 year ago
tmshapland|1 year ago
wwwlouishinofun|1 year ago
jcims|1 year ago
wg0|1 year ago
ryukoposting|1 year ago
m11a|1 year ago
nitizaz|1 year ago
codedokode|1 year ago
mazoza|1 year ago
programjames|1 year ago
unknown|1 year ago
[deleted]
mnk47|1 year ago
zachthewf|1 year ago
briansm|1 year ago
https://ieeexplore.ieee.org/document/5680311
lordofgibbons|1 year ago
Tepix|1 year ago
xarope|1 year ago
And is the interactive generation just doing an ELIZA? i.e. "P: tell us about how AI will be interesting", "A: Yeah AI will, yeah, be interesting".
kunley|1 year ago
ttul|1 year ago
Jayakumark|1 year ago
nitizaz|1 year ago
awinter-py|1 year ago
spuz|1 year ago
Dawny33|1 year ago
Does Hertz support multi-lingual audio right now?
nicholas-cc|1 year ago
wwwlouishinofun|1 year ago
timnetworks|1 year ago
ryukoposting|1 year ago
blixt|1 year ago
imjonse|1 year ago