Natural Stupidity Versus Artificial Intelligence

Techsense Team | 6:42 pm, 8th September

Machines that simulate human intelligence and mimic human actions are transforming industries. Artificial intelligence (AI) is different from other revolutionary technologies in that it stretches beyond the technology and engineering domains to include social sciences, behavioral sciences and philosophy. This is a strength, given the numerous AI applications in use today and its immense potential for the future. There’s also a drawback – just as AI absorbs human intelligence, it can also absorb human stupidity.


Prejudices in training data


AI uses training data, comprising text, images, audio and/or video, to build a learning model and perform a particular task to a high degree of accuracy. Algorithms learn from this data and behave based on what the data has taught them.


The problem is that training data can contain prejudiced human decisions or reflect social or historical inequities even after variables such as race, gender or sexual orientation are removed. In 2016, Microsoft’s Tay, an AI-based conversational chatbot for Twitter, made news for the wrong reasons after it began tweeting racist, misogynistic and anti-Semitic messages. This was the result of Twitter users taking advantage of the bot’s social-learning abilities to teach it to spew racist rants. While Twitter users were merely having fun with the bot, introducing biases into training data can have far-reaching consequences.
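
To see why simply removing a protected variable is not enough, consider the minimal sketch below. It is purely illustrative and uses made-up data: the model never sees the protected attribute, but a correlated proxy feature lets it reproduce the historical bias anyway.

# Hypothetical sketch: the protected attribute is dropped, yet a correlated
# proxy feature carries the same bias into the model's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)           # protected attribute (never shown to the model)
proxy = group + rng.normal(0, 0.3, size=n)   # "neutral" feature that correlates with the group
label = (0.8 * group + rng.normal(0, 0.5, size=n) > 0.5).astype(int)  # historically biased decisions

model = LogisticRegression().fit(proxy.reshape(-1, 1), label)
pred = model.predict(proxy.reshape(-1, 1))
for g in (0, 1):
    # Predicted approval rates still differ sharply between the two groups.
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")

Dropping the sensitive column changes nothing here, because the bias travels through the proxy.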


Choosing poor-quality data


Biases aside, inconsistencies in big data can skew outcomes. A plainly stupid decision would be to use a data set that does not reflect a model’s use case. It is important to determine whether the data is representative of the problem, and to understand how combining internal and external data affects outcomes. For example, selection bias may creep in when the chosen data is not representative of the future population of cases the model will encounter. There are real-world examples of facial recognition software trained on datasets containing 70-80% male and white profiles, to the exclusion of other genders, ethnicities and races.
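
A basic audit of this kind can be run before training. The sketch below is only illustrative and assumes hypothetical metadata fields, "gender" and "skin_tone", attached to each training image; it simply reports how heavily each category is represented.

# Hypothetical metadata audit for a face-recognition training set.
from collections import Counter

records = [
    {"gender": "male", "skin_tone": "light"},
    {"gender": "male", "skin_tone": "light"},
    {"gender": "female", "skin_tone": "dark"},
    # ...one record per training image
]

def distribution(rows, key):
    # Share of each category value in the dataset.
    counts = Counter(row[key] for row in rows)
    total = sum(counts.values())
    return {value: round(count / total, 2) for value, count in counts.items()}

print(distribution(records, "gender"))     # flags over-representation of one gender
print(distribution(records, "skin_tone"))  # flags over-representation of one skin tone

If the reported shares diverge sharply from the population the model will serve, that is a signal to rebalance the data before training rather than after deployment.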


Human intervention in data selection and clean-up is crucial, but a lot depends on the reasoning capability of analysts. Subjectivity, or an inability to grasp the impact of poor-quality data on the outcomes the AI application will deliver, can quickly diminish its utility and success.


Looking to the future


We are in the age of ‘weak AI’, meaning that the artificial intelligence in use today has a narrow focus and performs one action, such as driving a car or recognizing faces. ‘Strong AI’, which would surpass humans in just about every cognitive task, is expected to arrive in the future. Hopefully, the powerful AI machines of the years to come will be free of the human beliefs, biases, subjectivity and errors in judgment prevalent in current models.

