When employers recruit new talent, they often use many means to evaluate candidates: prior work experience, grades and educational background, college attendance, writing samples, and the like. Recently, one company has begun offering a different tool for assessing and recruiting talent: voice evaluation.
In a recent story on Fast Company, the Voice Analyzer (TM) tool was highlighted as a new means of recruiting and evaluating talent based on the tone of a candidate's voice and the emotions it might evoke in potential customers. As the article explains, Jobaline has developed an algorithm used to "assess paralinguistic elements of speech, such as tone and inflection, and predict which emotions a specific voice will elicit--excitement, for instance, or calmness." A recent story on NPR's All Things Considered, "Now Algorithms Are Deciding Whom to Hire, Based on Voice," features Jobaline CEO Luis Salazar explaining the process.
Such algorithms are built on large amounts of data about voice qualities and the emotions they evoke. One quote from the Fast Company article, however, particularly caught my attention: "There are so many sources of bias when you're dealing with humans...The beauty of math is that it's blind. It helps give everybody a fair chance." This quote raises an interesting question about data analysis--does the fact that an algorithm is based on reams of data mean it is unbiased? It is true that the same algorithm can be applied consistently over and over, so its application and the factors it assesses are the same for every voice analyzed. But that is not the same as saying any particular algorithm (and I am not offering any opinion about the Jobaline Voice Analyzer specifically) cannot contain underlying bias in what it predicts. Data is not necessarily a panacea for inherent biases.
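To make the distinction between consistency and neutrality concrete, here is a minimal, entirely hypothetical Python sketch (it has nothing to do with Jobaline's actual algorithm; the pitch data and labels are invented for illustration). A model fit to biased historical labels will apply itself identically to every candidate, yet still reproduce the bias baked into its training data:

```python
# Hypothetical illustration: a scoring rule "learned" from biased
# historical hiring labels. Suppose past human raters systematically
# favored lower-pitched voices, regardless of merit.
history = [(110, 1), (120, 1), (130, 1),   # hired
           (200, 0), (210, 0), (220, 0)]   # rejected

# "Train" a threshold: the midpoint between the mean pitch of each class.
hired = [p for p, y in history if y == 1]
rejected = [p for p, y in history if y == 0]
threshold = (sum(hired) / len(hired) + sum(rejected) / len(rejected)) / 2

def score(pitch_hz):
    """Deterministic: the same voice always gets the same answer."""
    return 1 if pitch_hz < threshold else 0

# Consistency is not neutrality: two equally qualified candidates who
# differ only in vocal pitch receive different outcomes every time.
print(score(125), score(205))
```

The point of the sketch is that the math really is "blind" in one sense (every input is scored by the same rule), but the rule itself was derived from biased labels, so the bias survives intact in its predictions.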