Speech Synthesis: Web Speech API, part one


It can’t have escaped your notice that iOS7 was recently released, and with it a new version of Safari. Among the many additions and changes to its standards support comes a (partial) implementation of the new Web Speech API. This API has two core features: speech recognition, which uses a web service to transcribe voice input; and speech synthesis, which uses system libraries to output an artificial voice. Safari for iOS7 brings support for the latter, so I’m going to briefly run through how it works.

At its simplest, you pass the words you want to say into a new SpeechSynthesisUtterance object, then use the speak method of the speechSynthesis interface to say them. That’s easier than I’ve made it sound, as you can see in this code example:

var foo = new SpeechSynthesisUtterance('Hello world!');
window.speechSynthesis.speak(foo);

You can modify the SpeechSynthesisUtterance object through a series of attributes; the behaviour of volume, rate and pitch should be fairly obvious.

foo.volume = 0.5; // 0 to 1
foo.rate = 1.5; // 0.1 to 10
foo.pitch = 2; // 0 to 2

You can change the voice through the lang attribute; the default language is taken from the language of the document, but setting lang to a different language-country code (an ISO 639 language code plus an ISO 3166 country code, such as 'en-GB') will supply a different voice:

foo.lang = 'en-GB';

I’ve built a pretty rough and ready demo page so you can play around with these values; you’ll need to open it in Safari for iOS7, Safari 6.1 or 7 for OSX, or Chrome Canary (for OSX; untested on other platforms).

Demo: Speech Synthesis API
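
As support is currently limited to those browsers, it may be worth guarding any calls behind a simple feature check. This is just a minimal sketch, not part of the demo:

// Only attempt to speak if the browser exposes the relevant interfaces
if ('speechSynthesis' in window && 'SpeechSynthesisUtterance' in window) {
  var greeting = new SpeechSynthesisUtterance('Hello world!');
  window.speechSynthesis.speak(greeting);
} else {
  console.log('Speech synthesis is not supported in this browser.');
}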

There’s also a voiceURI attribute, implemented in Safari as voice (and likely to change in the spec too), which allows you to specify a different variation on the current voice, but I can’t get this to work. You can see all available voices by calling the getVoices method on speechSynthesis:

var bar = window.speechSynthesis.getVoices();

This returns an array of system voices, from which you can get the value of the voiceURI attribute and pass it into voice – although, as I said, this seems to have no effect in any of the tests I’ve run.
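For what it’s worth, here’s a rough sketch of how that might be wired together, based on the behaviour described above; the property names are the ones Safari and the draft spec used at the time, so treat it as an illustration rather than a guaranteed recipe (and remember it had no audible effect in my tests):

var voices = window.speechSynthesis.getVoices();
var utterance = new SpeechSynthesisUtterance('Hello world!');

// Each entry has name, lang and voiceURI attributes; pick one and pass
// its voiceURI into voice (Safari) / voiceURI (the draft spec)
if (voices.length) {
  utterance.voice = voices[0].voiceURI;
  utterance.voiceURI = voices[0].voiceURI;
}

window.speechSynthesis.speak(utterance);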

As I mentioned at the start of this piece, the Web Speech API has another side: speech recognition. I’ll cover that in a subsequent article in the near future.

Update: From a little further experimentation, it looks like on iOS the speech must be initiated by a user action, rather than (for example) on page load. Also, I’ve added a little easter egg to my site: if you’re using Chrome Canary or Safari 6.1/7 (both for OSX), try using the site search box…
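
In practice that means calling speak from something like a click or touch handler rather than running it on load; here’s a minimal sketch (the #speak-button element is made up for the example):

// Speak only in response to a user action, e.g. a tap on a button
var button = document.querySelector('#speak-button'); // hypothetical element
button.addEventListener('click', function () {
  var utterance = new SpeechSynthesisUtterance('Hello world!');
  window.speechSynthesis.speak(utterance);
}, false);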

11 comments on
“Speech Synthesis: Web Speech API, part one”

  1. […] Speech Synthesis: Web Speech API, part one – Broken Links Check out the demo in Safari in iOS7 or Safari in OSX. […]

  2. Also works in Chrome Canary btw :-)

  3. […] iOS7 Safari now supports the web speech API and Peter Gasston shows you how you can use it. Speech Synthesis: Web Speech API, part one […]

  4. […] in Chrome (as of stable v30) it appears that only speech recognition is implemented, and a recent blog has noted that Safari 6.1, 7 and Safari for iOS7 supports speech synthesis. Although it can […]

  5. Speech Synthesis is no longer working for me in Chrome v31 on Mac OS 10.7.5 (or Safari)… not sure when it stopped working but it was fine earlier in 2013. Tried it on a couple of machines.

    Is it currently working for you (Nov 26th 2013) ?

    I’m not seeing any status changes on the Chrome mailing lists…

  6. @Rob Jones: it did stop working for me briefly, but I put in a workaround and it seems fine for me now – in Chrome 32, at least.

  7. […] to their sites. Google added recognition to the Chrome last year and today it adds built-in speech synthesis, too. Using this API, developers can get a list of supported voices from a given machine and then […]

  8. […] Other new features in this beta include support for the Web Speech API, which lets developers add speech recognition and speech synthesis to the sites they build. Speech recognition was added to Chrome last year, and today speech synthesis is built in too. To use this API, developers specify one of the voices supported on the machine so that the speech synthesis engine can speak with that voice (see below). Safari on iOS 7 also partially supports speech synthesis. […]

  9. I made a demo where you can choose a voice from among the supported voices. The existing demo doesn’t seem to let you see the list of voices.

    Native OSX voices are fun to play with :)

    http://jsfiddle.net/NicuPrinFum/Fb8WG/embedded/result/

  10. […] The Speech API has now been extended with speech synthesis. There’s a demo […]

  11. […] The Speech API has now been extended with speech synthesis. There’s a demo […]