
meSpeak.js

meSpeak.loadConfig("mespeak_config.json");
meSpeak.loadVoice("en-us.json");
meSpeak.speak("hello world");
meSpeak.speak("hello world", { option1: value1, option2: value2 .. });
meSpeak.speak("hello world", { option1: value1, option2: value2 .. }, myCallback);
var id = meSpeak.speak("hello world");
meSpeak.stop(id);

General form:
meSpeak.speak( text [, { option1: value1, option2: value2 .. } [, callback ]] );

text: The string of text to be spoken. The text may contain line-breaks ("\n") and special characters. Default text-encoding is UTF-8 (see the option "utf16" for others).

options (eSpeak command-options):
* amplitude: How loud the voice will be (default: 100)
* pitch: The voice pitch (default: 50)
* speed: The speed at which to talk (words per minute) (default: 175)
* voice: Which voice to use (default: last voice loaded or defaultVoice, see below)
* wordgap: Additional gap between words in 10 ms units (default: 0)
* variant: One of the variants to be found in the eSpeak directory "~/espeak-data/voices/!v". Variants add some effects to the normally plain voice, e.g. notably a female tone. Valid values are: "f1", "f2", "f3", "f4", "f5" for female voices; "m1", "m2", "m3", "m4", "m5", "m6", "m7" for male voices; "croak", "klatt", "klatt2", "klatt3", "whisper", "whisperf" for other effects. (Using eSpeak, these would be appended to the "-v" option by "+" and the value.) Note: Try "f2" or "f5" for a female voice.
* linebreak: (Number) Line-break length, default value: 0.
* capitals: (Number) Indicate words which begin with capital letters. 1: Use a click sound to indicate when a word starts with a capital letter, or double click if the word is all capitals. 2: Speak the word "capital" before a word which begins with a capital letter. Other values: Increase the pitch for words which begin with a capital letter; the greater the value, the greater the increase in pitch (e.g. 20).
* punct: (Boolean or String) Speaks the names of punctuation characters when they are encountered in the text. If a string of characters is supplied, then only those listed punctuation characters are spoken, e.g. { "punct": ".,;?" }.
* nostop: (Boolean) Removes the end-of-sentence pause which normally occurs at the end of the text.
* utf16: (Boolean) Indicates that the input is UTF-16, default: UTF-8.
* ssml: (Boolean) Indicates that the text contains SSML (Speech Synthesis Markup Language) tags or other XML tags. (A small set of HTML is supported too.)

further options (meSpeak.js specific):
* volume: Volume relative to the global volume (number, 0..1, default: 1). Note: the relative volume has no effect on the export using option 'rawdata'.
* rawdata: Do not play, return data only. The type of the returned data is derived from the value (case-insensitive) of 'rawdata':
  - 'base64': returns a base64-encoded string.
  - 'mime': returns a base64-encoded data-url (including the MIME-header). (Synonyms: 'data-url', 'data-uri', 'dataurl', 'datauri')
  - 'array': returns a plain Array object with uint 8 bit data.
  - default (any other value): returns the generated wav-file as an ArrayBuffer (8-bit unsigned).
  Note: The value of 'rawdata' must evaluate to boolean 'true' in order to be recognized.
* log: (Boolean) Logs the compiled eSpeak-command to the JS-console.

callback: An optional callback function to be called after the sound output has ended. The callback will be called with a single boolean argument indicating success. If the resulting sound is stopped by meSpeak.stop(), the success-flag will be set to false.
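As a rough illustration (all option values here are arbitrary examples, not recommendations), a call that combines several of the options above with a callback might look like this:

meSpeak.speak('hello world', {
  amplitude: 100,
  pitch: 60,
  speed: 160,
  variant: 'f2',  // one of the female variants mentioned above
  wordgap: 2
}, function (success) {
  // success is false if the sound was stopped via meSpeak.stop()
  console.log('speech finished:', success);
});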
Returns:
* if called with option rawdata: a stream in the requested format (or null, if the required resources have not loaded yet).
* default: a 32bit integer ID greater than 0 (or 0 on failure). The ID may be used to stop this sound by calling meSpeak.stop(<id>).

if (meSpeak.isVoiceLoaded('de')) meSpeak.setDefaultVoice('de');
// note: the default voice is always the last voice loaded

meSpeak.loadVoice('fr.json', userCallback);
// userCallback is an optional callback-handler. The callback will receive two arguments:
// * a boolean flag for success
// * either the id of the voice, or a reason for errors ('network error', 'data error', 'file error')

alert(meSpeak.getDefaultVoice()); // 'fr'

if (meSpeak.isConfigLoaded()) meSpeak.speak('Configuration data has been loaded.');
// note: any calls to speak() will be deferred, if no valid config-data has been loaded yet.

meSpeak.setVolume(0.5);
meSpeak.setVolume( volume [, id-list] );

Sets a volume level (0 <= v <= 1).
* If called with a single argument, the method sets the global playback-volume; any sounds currently playing will be updated immediately with respect to their relative volume (if specified).
* If called with more than a single argument, the method will set and adjust the relative volume of the sound(s) with the corresponding ID(s).
Returns: the volume provided.

alert(meSpeak.getVolume()); // 0.5
meSpeak.getVolume( [id] );

Returns a volume level (0 <= v <= 1).
* If called without an argument, the method returns the global playback-volume.
* If called with an argument, the method will return the relative volume of the sound with the corresponding ID. If no sound with a corresponding ID is found, the method will return undefined.

var browserCanPlayWavFiles = meSpeak.canPlay(); // test for compatibility

// export speech-data as a stream (no playback):
var myUint8Array = meSpeak.speak('hello world', { 'rawdata': true });     // typed array (ArrayBuffer)
var base64String = meSpeak.speak('hello world', { 'rawdata': 'base64' });
var myDataUrl = meSpeak.speak('hello world', { 'rawdata': 'data-url' });
var myArray = meSpeak.speak('hello world', { 'rawdata': 'array' });       // simple array

// playing cached streams (any of the export formats):
meSpeak.play( stream [, relativeVolume [, callback]] );

var stream1 = meSpeak.speak('hello world', { 'rawdata': true });
var stream2 = meSpeak.speak('hello again', { 'rawdata': true });
var stream3 = meSpeak.speak('hello yet again', { 'rawdata': 'data-url' });
meSpeak.play(stream1);       // using global volume
meSpeak.play(stream2, 0.75); // 75% of global volume
meSpeak.play(stream3);       // v.1.4.2: plays data-urls or base64-encoded strings
var id = meSpeak.play(stream1);
meSpeak.stop(id);

Arguments:
* stream: A stream in any of the formats returned by meSpeak.speak() with the "rawdata" option.
* volume: (optional) Volume relative to the global volume (number, 0..1, default: 1).
* callback: (optional) A callback function to be called after the sound output has ended. The callback will be called with a single boolean argument indicating success. If the sound is stopped by meSpeak.stop(), the success-flag will be set to false. (See also: meSpeak.speak().)
Returns: A 32bit integer ID greater than 0 (or 0 on failure). The ID may be used to stop this sound by calling meSpeak.stop(<id>).

meSpeak.stop( [<id-list>] );
Stops the sound(s) specified by the id-list. If called without an argument, all sounds currently playing, processed, or queued are stopped. Any callback(s) associated with the sound(s) will return false as the success-flag.

Arguments:
* id-list: Any number of IDs returned by a call to meSpeak.speak() or meSpeak.play().
Returns: The number (integer) of sounds actually stopped.
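To make the volume handling above concrete, here is a small sketch (IDs and values are illustrative; it assumes both sounds are still playing when the volume calls run):

var id1 = meSpeak.speak('a first, longer utterance');
var id2 = meSpeak.speak('a second utterance');
meSpeak.setVolume(0.8);              // global volume, affects both sounds
meSpeak.setVolume(0.25, id1);        // relative volume of the first sound only
console.log(meSpeak.getVolume());    // 0.8 (global)
console.log(meSpeak.getVolume(id1)); // 0.25 (relative volume of sound id1)
meSpeak.stop(id1, id2);              // stops both; any callbacks receive false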

Note on export formats, ArrayBuffer (typed array, default) vs. simple array:
The ArrayBuffer (8-bit unsigned) provides a stream ready to be played by the Web Audio API (as a value for a BufferSourceNode), while the plain array (JavaScript Array object) may be best for export (e.g. sending the data to Flash via Flash's ExternalInterface). The default raw format (ArrayBuffer) is the preferred format for caching streams to be played later by meSpeak by calling meSpeak.play(), since it involves the least processing overhead.
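As a minimal sketch of the Web Audio route (assuming the browser exposes AudioContext or webkitAudioContext and that the meSpeak config and a voice have already been loaded), the exported wav-ArrayBuffer can be decoded and played like this:

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var wavBuffer = meSpeak.speak('hello world', { rawdata: true }); // ArrayBuffer containing a wav file
audioCtx.decodeAudioData(wavBuffer, function (decoded) {
  var source = audioCtx.createBufferSource();
  source.buffer = decoded;              // decoded audio data
  source.connect(audioCtx.destination); // route to the speakers
  source.start(0);
});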

meSpeak.speakMultipart() — concatenating multiple voices

Using meSpeak.speakMultipart() you may mix multiple parts into a single utterance.

See the Multipart-Example for a demo.

The general form of meSpeak.speakMultipart() is analogous to meSpeak.speak(), but with an array of objects (the parts to be spoken) as the first argument (rather than a single text):

meSpeak.speakMultipart( <parts-array> [, <options-object> [, <callback-function> ]] );

meSpeak.speakMultipart(
  [
    { text: "text-1", <other options> },
    { text: "text-2", <other options> },
    ...
    { text: "text-n", <other options> }
  ],
  { option1: value1, option2: value2 .. },
  callback
);

Only the first argument is mandatory; any further arguments are optional.
The parts-array must contain at least one element (of type object).
For any other options refer to meSpeak.speak(). Any options supplied as the second argument will be used as defaults for the individual parts. (Options supplied with an individual part override these defaults.)
The method returns — like meSpeak.speak() — either an ID, or, if called with the "rawdata" option (in the general options / second argument), a stream-buffer representing the generated wav-file.
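A concrete sketch (it assumes the en/en-us, fr and de voices have been loaded beforehand; the exact voice ids depend on the voice files you use):

meSpeak.speakMultipart(
  [
    { text: 'Hello.',   voice: 'en/en-us' },
    { text: 'Bonjour.', voice: 'fr', pitch: 60 },
    { text: 'Hallo.',   voice: 'de' }
  ],
  { amplitude: 100, speed: 160 },  // defaults applied to every part
  function (success) { console.log('multipart finished:', success); }
);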

Note on iOS Limitations

iOS (currently supported only using Safari) provides a single audio-slot, playing only one sound at a time.
Thus, any concurrent calls to meSpeak.speak() or meSpeak.play() will stop any other sound playing.
Further, iOS reserves volume control exclusively to the user; any attempt to change the volume from a script will have no effect.
Please note that you still need a user-interaction at the very beginning of the chain of events in order to have a sound played by iOS.
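One common way to satisfy this (sketch only; the element id and handler are made up for the example) is to trigger the first call to meSpeak from a click or touch handler:

// the element id 'speak-button' is just an example
document.getElementById('speak-button').addEventListener('click', function () {
  meSpeak.speak('hello from iOS');
  // note: script-side volume changes are ignored on iOS, see above
}, false);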

Note on Options

The first set of options listed above corresponds directly to options of the espeak command. For details see the eSpeak command documentation.
The meSpeak.js options and their eSpeak counterparts are listed below (meSpeak.speak() accepts both sets, but prefers the long form); a small example follows the table:

meSpeak.js eSpeak
amplitude -a
wordgap -g
pitch -p
speed -s
voice -v
variant -v<voice>+<variant>
utf16 -b 4 (default: -b 1)
linebreak -l
capitals -k
nostop -z
ssml -m
punct --punct[="<characters>"]
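One way to see this mapping in action is the 'log' option documented above, which prints the compiled eSpeak command to the console (the exact output depends on the meSpeak version and the loaded voice; the values here are arbitrary):

meSpeak.speak('hello world', { pitch: 60, speed: 150, wordgap: 2, log: true });
// logs something along the lines of: espeak -v en-us -p 60 -s 150 -g 2 ...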

Voices Currently Available

  • ca (Catalan)
  • cs (Czech)
  • de (German)
  • el (Greek)
  • en/en (English)
  • en/en-n (English, regional)
  • en/en-rp (English, regional)
  • en/en-sc (English, Scottish)
  • en/en-us (English, US)
  • en/en-wm (English, regional)
  • eo (Esperanto)
  • es (Spanish)
  • es-la (Spanish, Latin America)
  • fi (Finnish)
  • fr (French)
  • hu (Hungarian)
  • it (Italian)
  • kn (Kannada)
  • la (Latin)
  • lv (Latvian)
  • nl (Dutch)
  • pl (Polish)
  • pt (Portuguese, Brazil)
  • pt-pt (Portuguese, European)
  • ro (Romanian)
  • sk (Slovak)
  • sv (Swedish)
  • tr (Turkish)
  • zh (Mandarin Chinese, Pinyin)*
  • zh-yue (Cantonese Chinese, Provisional)**
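To use one of the voices above, load its JSON file and either set it as default or pass it via the voice option. A small sketch (the file name 'de.json' follows the naming used earlier in this post; the actual path depends on how the voice files are deployed):

meSpeak.loadVoice('de.json', function (success, idOrError) {
  if (success) {
    meSpeak.speak('Guten Tag', { voice: 'de' });
  } else {
    console.log('voice could not be loaded:', idOrError);
  }
});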

Overcoming Problems and Concerns

My biggest concern throughout this project was that people just "wouldn't get it". I really wanted to avoid having to spell it out for them, but this is essentially what I felt needed to happen for it not to be a total flop. I had wanted the user to engage with the content and feel intrigued and questioning; I feel that by giving it away it sort of lost its meaning, but then I really did not want the point to be lost. During my user testing it was obvious that my participants needed an encouraging nudge to realise why certain parts of the videos were relevant and why it was the way it was. The continued struggle with narrative definitely played a part in this, and I wish now that I had stopped trying to force it and made more of an interactive, collage-type front-end design, where the layers overlapped and interacted and simply spoke volumes on their own. I am really quite disappointed that the linear click-through narrative didn't work out; however, I feel this change was for the better, I just wish I'd had more time to execute it properly! I feel that my biggest downfall was not moving on more quickly when something wasn't quite working, and holding out for a solution for far too long. This is where doing too much research can often be a problem, because you get too hung up on it and too hung up on meeting the brief. I should have gone with my artistic instinct from the beginning.