
AITech Interview with Markus Schwarzer, CEO of Cyanite


With AI transforming industry after industry, music is no exception. Markus and his team at Cyanite have built an AI-powered music transformer model that promises to change how we listen to and search for music, delivering the right music content for any use case. In this interview, we dive deep into the fascinating world of AI in music, from the technology behind Cyanite’s transformer model to the impact it will have on the music industry.

Can you describe the technical architecture of Cyanite’s AI-powered music analysis platform?

The core of our tech is artificial intelligence based on transformer models, the same family of models behind products like Midjourney, ChatGPT, and DALL·E. In an initial training process, our AI was taught to recognize any feature that can characterize music and sound. From this layer, we derive all our insights into music. It’s highly flexible and can be tailored to a customer’s specific language or use case. We provide the service via an API, our own web app, or the AWS Marketplace.
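To make the delivery model concrete, here is a minimal sketch of what calling such an analysis API could look like. The endpoint, payload, and response fields below are illustrative assumptions for this article, not Cyanite’s documented interface.

```python
# Hypothetical sketch of requesting a music analysis over HTTP.
# The URL, auth scheme, and response shape are assumptions,
# not Cyanite's actual API.
import requests

API_URL = "https://api.example.com/v1/analyze"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def analyze_track(file_path: str) -> dict:
    """Upload a track and return the model's predicted tags."""
    with open(file_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": f},
        )
    response.raise_for_status()
    # Illustrative response, e.g. {"mood": [...], "genre": [...], "bpm": ...}
    return response.json()

tags = analyze_track("new_release.wav")
print(tags)
```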

How do you ensure the quality and accuracy of the music metadata generated by your platform?

We have a meticulous quality assurance mechanism: the analysis results of every new machine learning model are run through a variety of automated and manual tests, including a qualitative survey of music supervisors.
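The interview doesn’t detail those tests, but a simple automated check of this kind might compare a new model’s tags against a human-curated reference set before the model is promoted. A hedged sketch, with hypothetical tracks and labels:

```python
# Illustrative regression check: gate a new model release on its
# agreement with reference labels. The tracks, tags, and threshold
# below are made up for this example.
reference = {  # hypothetical gold labels from human reviewers
    "track_001": {"uplifting", "energetic"},
    "track_002": {"dark", "tense"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between predicted and reference tag sets (0..1)."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def passes_regression(predictions: dict, threshold: float = 0.8) -> bool:
    """Fail the release if mean agreement drops below the threshold."""
    scores = [jaccard(predictions[t], reference[t]) for t in reference]
    return sum(scores) / len(scores) >= threshold

new_model_output = {
    "track_001": {"uplifting", "energetic"},
    "track_002": {"dark", "calm"},
}
print(passes_regression(new_model_output))  # False: mean agreement ~0.67 < 0.8
```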

Can you describe how your platform uses machine learning to analyze music and detect attributes such as mood, tempo, and instrumentation?

We mostly use technology from image recognition and annotation to retrieve information from audio. In the first step, we transform the audio into its visual representation, called a spectrogram. A spectrogram shows how pitch and volume evolve over the course of the song. During training, the AI recognizes and memorizes specific features and patterns in these sequences of pitch and volume that are typical of, for example, certain moods. Whenever it sees a new track and recognizes the same features and patterns it saw in the training data, it predicts that this specific mood is evoked by the song.
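For readers who want to see that first step in code, the open-source librosa library can produce such a spectrogram in a few lines. This illustrates the general technique, not necessarily the tooling Cyanite uses in production:

```python
# Turn a waveform into a log-scaled mel spectrogram, the "image"
# an image-recognition-style model can learn patterns from.
import librosa
import numpy as np

y, sr = librosa.load("song.wav", sr=22050)           # waveform + sample rate
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
S_db = librosa.power_to_db(S, ref=np.max)            # log scale, closer to human hearing

# S_db is a 2-D array: rows = pitch bands, columns = time frames.
# A vision-style model can now look for recurring pitch/volume
# patterns associated with, say, a particular mood.
print(S_db.shape)  # (128, n_frames)
```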

How do you handle the privacy and security concerns of your customers when processing their music data?

Every user has their own ring-fenced database, so only they have access to it. For customers who require a higher level of security, for example when an international top-100 band is releasing new music and we receive the audio pre-release, we can obfuscate and encrypt the data so that it is still recognizable to the AI but inaudible, and thus useless, to anyone else.
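The interview doesn’t disclose the actual scheme, but one way to reason about “recognizable to the AI but inaudible” is to ship derived features such as spectrograms instead of raw audio, encrypted in transit. A sketch under that assumption, using Python’s cryptography library:

```python
# Hedged illustration only: send model-ready features, not playable
# audio, and encrypt them for transport. This is an assumption about
# how such a pipeline could work, not Cyanite's disclosed method.
import numpy as np
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice: exchanged per customer
cipher = Fernet(key)

features = np.random.rand(128, 512).astype(np.float32)  # stand-in for a spectrogram
payload = cipher.encrypt(features.tobytes())             # ciphertext for transport

# Server side: decrypt and restore the array the model expects.
restored = np.frombuffer(cipher.decrypt(payload), dtype=np.float32).reshape(128, 512)
assert np.array_equal(features, restored)
```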

Read More @ https://ai-techpark.com/aitech-interview-with-markus-schwarzer/
