How to Install and Use the AudioStack Python SDK

Get started and generate your first audio πŸš€

This guide is aimed at developers who want to get up and running with AudioStack as quickly as possible. By the end of this Quick Start, you will have generated your first audio using our Python SDK, and know how to change voices, sound templates and mastering presets.

Step 1 - Make sure you're on Python 3

python --version

Check that Python 3 is installed - if not, you can install it from the Python website or with Homebrew.
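If you prefer to check from Python itself, this short snippet verifies the interpreter version (plain Python, no AudioStack code involved):

```python
import sys

# Fail fast if this interpreter is not Python 3.
assert sys.version_info.major >= 3, "Please install Python 3"
print("Python", sys.version.split()[0], "detected")
```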

Step 2 - Create and activate a virtualenv for your project

  • To create: python3 -m venv myFirstAudioStackProject
  • To activate (Mac/Linux): source myFirstAudioStackProject/bin/activate
  • To activate (Windows): myFirstAudioStackProject\Scripts\activate

Step 3 - Install AudioStack Python SDK

pip3 install -U audiostack

Step 4 - Create your Python File

Create a new Python file in your project directory - for example, quickstart.py.

Step 5 - Copy and Paste the Quick Start Code

Copy and paste this example code into your Python file, replacing APIKEY with your personal API key. You can access your API key from the AudioStack Platform.

import audiostack
import os

# First, paste your API key inside the quotation marks below:

audiostack.api_key = "APIKEY"

# This code example demonstrates functionality from the Content, Speech, Production and Delivery parts of the AudioStack API.
# Using the API, you can create production-ready audio assets in minutes.

# In Content, you can create scripts and manage your production assets.

script = """
<as:section name="main" soundsegment="main">
With AudioStack, you can create compelling, synthetic audio adverts in minutes. Add your copy here to get started. </as:section>

names = ["Cosmo"] # Add names to the list to generate multiple audio files using different voices
presets = ["musicenhanced"]
templates = ["sound_affects"]

print("Creating the script...")
script = audiostack.Content.Script.create(
    scriptText=script, scriptName="test", projectName="mastering_test"

for name in names:
    # In Speech, you can access top quality AI voice models, or use your own cloned voice.
    print(f"Synthesizing speech for {name}")
    speech = audiostack.Speech.TTS.create(scriptItem=script, voice=name, speed=1)
    for template in templates:
        for preset in presets:
            # In Production, you can dynamically mix content, apply sound designs and master your audio so that it sounds as professional as possible.
                f"Mixing the speech with template `{template}` using `{preset}` preset"
            mix = audiostack.Production.Mix.create(
            # Now you can download the wav file, or use the Delivery API to encode it to another format.
            print("Downloading the wav file...")
            file_name = f"V1_{name}_{template}_{preset}"
            current_directory = os.path.dirname(os.path.abspath(__file__))
            print(f"File downloaded to: {current_directory}/{file_name}.wav")

            # In Delivery, you can encode your audio to a variety of formats, and host it on our server!
            delivery = audiostack.Delivery.Encoder.encode_mix(
            print("MP3 file URL:", delivery.url)


Want to try a low cost voice to test out how the API works?

We keep one of our Retro voices available for this purpose - try using "Rollingthunder" to reduce the number of credits you use up for this test.
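In the quick-start script, that just means swapping the contents of the names list, for example:

```python
# Use the low-cost Retro voice for testing instead of a premium voice.
names = ["Rollingthunder"]
print(names)
```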


Keep your API key safe

Remember, your API key is like a password, so you should keep it safe and avoid sharing it with anyone, including in shared or public repositories.
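One common way to keep the key out of your source code is to read it from an environment variable rather than hardcoding it. A minimal sketch, assuming you export a variable you have named AUDIOSTACK_API_KEY in your shell (the variable name is our choice for illustration, not something the SDK requires):

```python
import os

# In your shell, run:  export AUDIOSTACK_API_KEY="your-real-key"
# setdefault only supplies a placeholder so this example runs standalone.
os.environ.setdefault("AUDIOSTACK_API_KEY", "example-key")

api_key = os.environ["AUDIOSTACK_API_KEY"]
# In the quick-start script, you would then assign:
#     audiostack.api_key = api_key
print("Key loaded:", bool(api_key))
```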


Errors on authentication?

Check your .bashrc file or your environment settings to make sure you are using the correct API key!
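A quick way to see which key your environment is actually supplying, without printing the whole secret, is to log a masked version. This assumes you store the key in an environment variable, here named AUDIOSTACK_API_KEY for illustration - adapt the name to wherever you keep your key:

```python
import os

# Print only the first four characters so the secret never ends up in logs.
key = os.environ.get("AUDIOSTACK_API_KEY", "")
masked = key[:4] + "..." if key else "(not set)"
print("API key in this environment:", masked)
```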

Step 6 - Run your code

In your terminal, run the file you created in Step 4 (substitute your own file name if it differs):

python3 quickstart.py

You just generated your first audio asset using AudioStack.

Bonus step - Experiment with different voices, templates and presets

Now you've created your first audio asset, you can experiment with different voices, sound templates and presets to customise your audio. Simply add additional items to the respective list, separated by commas. For example:

names = ["Promo_Ramona", "Cosmo"]  
presets = ["voiceenhanced", "balanced", "musicenhanced"]
templates = ["future_focus"]

What’s Next

Check out some of the use cases that your team can solve using the AudioStack API: