To play audio with the Web Audio API, we need to fetch the audio file as an `ArrayBuffer`, decode it into an `AudioBuffer`, and hand that buffer to an `AudioBufferSourceNode` for playback.
To get an audio buffer of the sound to play, use the `AudioContext.decodeAudioData()` method like so:
```js
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

// Fetch the MP3 file from the server
fetch("sound/track.mp3")
  // Read the response body as an ArrayBuffer
  .then(response => response.arrayBuffer())
  // Decode the audio data into an AudioBuffer
  .then(buffer => audioCtx.decodeAudioData(buffer))
  .then(decodedData => {
    // ...
  });
```
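If you prefer `async`/`await`, the same flow can be wrapped in a small helper. This is just a sketch: the `loadTrack` name is hypothetical, and it assumes `audioCtx` is already in scope as above.

```js
// Fetch a URL and decode it into an AudioBuffer.
// (loadTrack is a hypothetical helper name; audioCtx is the
// AudioContext created above.)
async function loadTrack(url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  // decodeAudioData returns a promise in current browsers
  return audioCtx.decodeAudioData(arrayBuffer);
}
```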
When the final promise resolves, you'll be given the audio in the form of an `AudioBuffer`. This can then be attached to an `AudioBufferSourceNode` and played, like so:
```js
const source = audioCtx.createBufferSource();
source.buffer = decodedData;          // the decoded AudioBuffer from above
source.connect(audioCtx.destination); // nothing is audible until the node is connected
source.start();
```
Here, `.start()` takes up to three optional parameters (`when`, `offset`, and `duration`), which schedule when to start playing the sample, set where in the sample to play from, and cap how long to play it, respectively.
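For example, assuming the decoded track is long enough, the call below would begin playback one second from now, start five seconds into the sample, and stop after ten seconds:

```js
// start(when, offset, duration) - all values in seconds
// (assumes the track is at least 15 seconds long)
source.start(audioCtx.currentTime + 1, 5, 10);
```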
More information about how to manipulate the buffer source can be found on MDN.
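Putting it all together, here's a minimal end-to-end sketch. The `#play` button and the `loadTrack` helper from earlier are assumptions for illustration; a click handler is used because modern browsers won't produce sound until a user gesture has resumed the audio context.

```js
// Assumes a <button id="play"> on the page and the loadTrack helper above
document.querySelector("#play").addEventListener("click", async () => {
  await audioCtx.resume(); // autoplay policies require a user gesture
  const decodedData = await loadTrack("sound/track.mp3");
  const source = audioCtx.createBufferSource();
  source.buffer = decodedData;
  source.connect(audioCtx.destination);
  source.start();
});
```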