AI Safety FAQ

A discussion of the mechanisms AudioStack uses to ensure AI safety

Introduction

At AudioStack, ensuring our systems are developed, deployed, and used safely is our priority. Over the past six years, our AI audio production services have supported a wide range of uses: from consumer-facing applications in healthcare and voicing news podcasts, to animating avatars, assisting in sales processes, enabling language localization in entertainment, and creating dynamic voiceovers for social media and advertising. These technologies have also enabled entirely new business activity. We fundamentally believe this technology will be transformative; however, as with any new technology, there are risks involved, and we consider managing those risks to be essential.

What are your protocols for Ethical AI?

Services

  • AudioStack provides the world’s first and only infrastructure for building synthetic audio. Users are able to create audio from text using the best AI voice providers on the market, as well as AudioStack's own models.
  • Once speech has been created, it is turned into audio using sound design templates and enhanced to the quality of a professional studio production.
  • AudioStack expands the possibilities of audio creation by offering automation and scalability to a process that hasn’t changed significantly in the last 100 years. We believe that voice actors, sound engineers, and physical studios will always exist. In fact, Aflorithmic could not exist or operate without them. It is not our mission to replace them.
  • We are committed to ethical, fair, and transparent AI, following the UK’s and the European Union’s Ethics Guidelines for Trustworthy Artificial Intelligence, and we will comply with further legislation such as the EU AI Act when it comes into force.

Voice Cloning

  • AudioStack will never use the voice of a private person or an actor without their explicit permission, given as an audio recording.
  • For historical figures, the written consent of the figure’s estate is required.
  • This does not apply to open-source technology such as vocoders (e.g. the voice of Stephen Hawking).
  • Voice cloning will only begin once Aflorithmic and the company / voice actor have agreed on the legal terms and privacy details.
  • Aflorithmic does not allow self-serve voice cloning without having communicated with the voice actor / individual personally.
  • All AI voice providers on AudioStack have been individually vetted and have signed an agreement with AudioStack stating that they own the rights to commercially use the speech models they give us access to.
  • Access to cloned voices on AudioStack is limited to the authorized organization and secured by strict authentication methods. As an additional layer of security, each organization’s use of AudioStack is protected by an individual API key.
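As an illustration of the last point, a per-organization API key is typically supplied with every request so the server can tie cloned-voice access to a single authorized account. The endpoint URL, field names, and header layout below are hypothetical and for illustration only, not AudioStack's documented API:

```python
import json
import os
import urllib.request

# Hypothetical endpoint -- illustrative only, not AudioStack's actual API.
API_URL = "https://api.example.com/v1/speech"

def build_request(text: str, voice: str) -> urllib.request.Request:
    """Attach the organization's API key so the server can restrict
    cloned-voice access to the authorized account only."""
    api_key = os.environ["AUDIO_API_KEY"]  # never hard-code secrets
    body = json.dumps({"text": text, "voice": voice}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```

Reading the key from the environment rather than the source code keeps the secret out of version control, and sending it as a bearer token lets the server reject any request from an organization not authorized for a given voice.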

Sound and music

  • Sound design templates and sound effects on AudioStack are built with sound elements that AudioStack owns or that are licensed under Creative Commons.
  • Sound files uploaded by users are subject to AudioStack’s terms and conditions. Any fraudulent, racist, sexual, or otherwise suspicious content detected will result in suspension of the account it was uploaded from.

Bad Actors

  • AudioStack monitors almost all created audio content daily.
  • Any fraudulent, racist, sexual, or otherwise suspicious content detected will result in the suspension of the account it was generated from.
  • AudioStack does not allow temporary email addresses to be used and limits signups to one per IP address.
  • All paid accounts come with a dedicated technical account manager who assists customers and assesses unlawful usage on an individual, personal level.

Limitations

  • Any technology can be misused. As a trailblazer in the production, use, and distribution of synthetic audio, however, we have a responsibility to limit misuse and to educate the public about the state of synthetic audio as much as possible.

What is the Content Authenticity Initiative?

AudioStack is a member of the Content Authenticity Initiative (CAI), which was first announced by Adobe in 2019. The CAI is now a group of hundreds of creators, technologists, journalists, activists, and leaders who seek to address misinformation and content authenticity at scale.

We are focused on cross-industry participation, with an open, extensible approach for providing media transparency that allows for better evaluation of content provenance.

This group collaborates with a wide set of representatives from software, publishing, and social media companies, human rights organizations, photojournalism, and academic researchers to develop content attribution standards and tools.
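The CAI's underlying technical work (the C2PA standard) attaches cryptographically signed provenance manifests to media files. As a greatly simplified sketch of the core tamper-evidence idea only, not the actual C2PA format, a publisher can record a cryptographic digest of an audio file so that any later modification is detectable:

```python
import hashlib

def fingerprint(audio_bytes: bytes) -> str:
    """Return a SHA-256 digest that changes whenever the audio is
    modified in any way -- the basic tamper-evidence idea behind
    provenance metadata."""
    return hashlib.sha256(audio_bytes).hexdigest()

# Stand-in bytes for an audio file; a real file would be read from disk.
original = b"\x00\x01fake-audio-samples"
published_digest = fingerprint(original)

# A recipient recomputes the digest; any edit to the bytes is detectable.
assert fingerprint(original) == published_digest
assert fingerprint(original + b"edit") != published_digest
```

Real provenance systems go further, binding such digests to signed statements about who created the content and how it was edited, but the digest is what makes tampering evident.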

For more information, visit https://contentauthenticity.org/.