This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 778450. Project coordinator: Verbio Technologies, Spain.

Pay-Me-Attention project

By 2020, 85% of interactions with clients will have no human intervention. The ability to interact with any type of device in many different situations opens up use cases across environments: mobile chatbots, autonomous cars, shopping assistants, web assistants, wearables, robots, home automation... and anything else related to IoT and human-machine interaction that the near, as yet unimaginable, future will demand.

Our solution

Verbio Pay-me-attention is the safest and simplest way of integrating security and reliability into voice recognition applications. It is scalable and easy to embed, achieves error rates below 1%, and cuts the recurrent costs of similar applications by 90%.

  • Totally integrated into the conversation. Verbio’s Pay-me-attention system can be integrated into the natural flow of the conversation, allowing verification and transcription to be completely transparent to the user.

  • In real time. Integrating Pay-me-attention voice technology significantly reduces interaction times and, consequently, brings down costs and increases the user’s satisfaction rate.

  • Perfect fit. We custom-fit the acoustic models (car, room, street…) and can combine them with continuous speech recognition algorithms (able to discriminate among different users), achieving optimal performance.

  • Flexibility of use. Verbio’s Pay-me-attention works within the normal flow of the conversation (8 sentences, under 30 seconds of voice, for parametrization), enabling transparent training and verification.

  • Omni-channel voiceprint: Verbio’s Pay-me-attention solution allows creating a single voiceprint that can be used across every communication channel in the omnichannel paradigm (IVR, mobile, web, on-site).

  • Anti-spoofing/anti-repeat protection algorithms: The core technology includes mechanisms that protect verification processes against reproduction and repetition attacks, as well as audio concatenation, making it possible to determine who is speaking and when.

    Key Features (illustrated in the sketch after this list):

    • Two-fold: detects where the speaker is present and verifies the identity with the biometric system
    • Initial speaker spotting: hypothesized speaker voiceprint vs. garbage/world model
    • Verification is performed using only speech aligned with the voiceprint model
    • Results include: biometric score and segmentation from speaker spotting
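
The feature list above describes a two-step pipeline: first spot the segments where the claimed speaker is present by comparing against a garbage/world model, then verify using only the speech aligned with the voiceprint and return a biometric score plus the segmentation. The sketch below illustrates one way such a flow could look. It is not Verbio’s actual implementation: the cosine-similarity scoring, the thresholds, and the random vectors standing in for real speaker embeddings are all illustrative assumptions.

    # Hypothetical sketch of the two-step flow above, NOT Verbio's implementation:
    # segment-level speaker spotting against a world model, then verification
    # restricted to the segments attributed to the target speaker.
    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def spot_speaker(segment_embeddings, voiceprint, world_model, margin=0.0):
        # Step 1 - speaker spotting: keep segments whose similarity to the
        # claimed voiceprint beats the garbage/world model by `margin`.
        segmentation = []
        for i, emb in enumerate(segment_embeddings):
            target_score = cosine(emb, voiceprint)
            world_score = cosine(emb, world_model)
            segmentation.append((i, target_score - world_score > margin))
        return segmentation

    def verify(segment_embeddings, segmentation, voiceprint, threshold=0.7):
        # Step 2 - verification: score only the speech aligned with the
        # voiceprint model; return the biometric score and the segmentation.
        aligned = [segment_embeddings[i] for i, is_target in segmentation if is_target]
        if not aligned:
            return {"accepted": False, "score": 0.0, "segmentation": segmentation}
        score = float(np.mean([cosine(emb, voiceprint) for emb in aligned]))
        return {"accepted": score >= threshold, "score": score, "segmentation": segmentation}

    # Toy usage: random vectors stand in for embeddings from a real speaker encoder.
    rng = np.random.default_rng(0)
    voiceprint = rng.normal(size=128)
    world_model = rng.normal(size=128)
    segments = [voiceprint + rng.normal(scale=0.3, size=128) for _ in range(5)] + \
               [world_model + rng.normal(scale=0.3, size=128) for _ in range(3)]
    segmentation = spot_speaker(segments, voiceprint, world_model)
    print(verify(segments, segmentation, voiceprint))

In this sketch only the five segments attributed to the target speaker contribute to the biometric score, mirroring the "verification using only speech aligned with the voiceprint model" feature above.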

