Tackling Biases in AI – Applause’s Novel Solution to a Classic Woe


Like humankind, machine learning is prone to biases. The good news is that these biases are identifiable, and that is exactly what a team of testing specialists from Applause recently demonstrated. Their approach does not just help remove biases; it also provides data for better training.

About Applause:

Applause is a massive global community of testers. Its members test apps and provide solutions for a range of companies, including some big names: Google, PayPal and Uber are among the known clients in its portfolio.

The company is now tapping into AI development.

Applause for AI:

Recently, Kristin Simonini, VP at Applause, explained what the new product means for the AI world. Companies were looking for support with data collection to feed their algorithms and train their systems, so testing functionality and supporting AI development became critical.

About the Solution:

The new solution covers five types of AI engagements: voice, OCR, image recognition, biometrics and chatbots. Here, the community plays a crucial role. Its sheer size allows the company to gather relevant data quickly and at scale, so the breadth and depth of this data is impressive. It can also be as diverse as needed: location, gender, device type and many other parameters all feed into the repository.

For AI work, testers can supply voice utterances, images and even required documents. This is a major help in building the niche data sets needed to overcome major AI hurdles.

Considering that AI is set to play a notable role in the future, it is only natural that biases need weeding out. Doing so is part of the overall responsibility of anyone involved in developing the technology of the future.
