Two groups of scientists in Spain have created a prototype computer system that can recognize a person's emotional state from the intonation of their voice.
A significant step forward from run-of-the-mill automated telephone services (you know, the ones that ask "you said *insert word here*, is that correct?" after every phrase), this system, developed by researchers at the Universidad Carlos III de Madrid and the Universidad de Granada, promises to be slightly less annoying. It analyses 60 acoustic parameters of the voice, including tone, duration, pauses, the energy behind the voice and the speed of talking.
Using these parameters, the system can detect negative emotions in intonation, such as anger, boredom or doubt, and alter its responses accordingly to alleviate the caller's stress or frustration. It also tracks the conversation's context as it progresses, so rather than repeatedly asking what you said, it predicts the direction of your interaction with it and prepares its next actions accordingly.
After identifying a person's intentions and mood, the system can adapt its dialogue to the situation: offering more help when it detects doubt, or becoming more light-hearted when the caller's voice sounds bored or angry.
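To give a flavour of the idea, here is a toy, rule-based sketch of that detect-then-adapt loop. It is not the researchers' actual system: the three acoustic cues, the thresholds and the response strategies are all hypothetical stand-ins for the 60 statistically modelled parameters the real prototype uses.

```python
def classify_emotion(pitch_variance, speech_rate, pause_ratio):
    """Map three illustrative acoustic cues to a coarse emotion label.

    pitch_variance: variability of the voice's pitch (arbitrary 0-1 scale)
    speech_rate:    rough syllables per second
    pause_ratio:    fraction of the utterance that is silence

    All thresholds below are invented for illustration only.
    """
    if pitch_variance > 0.7 and speech_rate > 5.0:
        return "anger"    # fast speech with highly varied pitch
    if pause_ratio > 0.4:
        return "doubt"    # long hesitations between words
    if pitch_variance < 0.2 and speech_rate < 3.0:
        return "boredom"  # flat, slow delivery
    return "neutral"


def adapt_response(emotion):
    """Pick a dialogue strategy for the detected emotional state."""
    strategies = {
        "anger":   "apologise and offer a human operator",
        "doubt":   "offer additional help and examples",
        "boredom": "shorten prompts and lighten the tone",
        "neutral": "continue the standard dialogue flow",
    }
    return strategies[emotion]


# Example: a hesitant caller triggers the extra-help strategy.
emotion = classify_emotion(pitch_variance=0.3, speech_rate=4.0, pause_ratio=0.5)
print(emotion, "->", adapt_response(emotion))
```

A real system would, of course, replace the hand-picked thresholds with a statistical model trained on labelled speech, but the overall shape of the loop, measure the voice, label the emotion, switch dialogue strategy, is the same.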
The prototype was tested and found to be successful in short conversations, but we won't be seeing anything like this for a while yet. And we hope that further down the road it will be applied to something more than your average automated call centre: maybe identifying pain in communication-challenged hospital patients? Or a more accurate form of lie detector? The opportunities are pretty wide-ranging, and still quite a way off.
Source: Alpha Galileo Foundation