
What's Next for AI? Some Challenges AI Must Overcome

by Max Smith
April 16, 2020
3 min read

Much like a calculator is always going to be better than a person at basic arithmetic, there is no doubt that a program, bot, or machine leveraging a well-developed machine learning algorithm is going to perform its function better than a human. So why isn't machine learning being used everywhere? It turns out there are a myriad of social and technical considerations.

Social Challenges

The "Black Box"

If a data-driven, better-performing tool is the benefit of machine learning, then the complex and often inexplicable nature of the underlying formula is arguably its biggest downside (at least socially). People sometimes refer to it as a black box, because the formula doesn't print out a human-readable explanation along with the decision it makes. In a real sense, the formula is itself the description of why the decision was made. A doctor can say "I made diagnosis x because of symptoms x, y, and z, and because you lacked symptoms a and b, which would rule out other illnesses". A human-readable version of the machine learning output would be "I made diagnosis x because of the weighting and correlation between (insert all factors here)" every time.
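To make that contrast concrete, here is a minimal sketch (not from the article) of what a model's "explanation" actually looks like. It assumes a simple logistic-regression diagnostic model trained on synthetic symptom data; the symptom names and the model itself are hypothetical, and the only explanation the model can offer is its learned weights.

```python
# A minimal sketch of why a trained model's "explanation" is just its weights.
# Assumes scikit-learn and synthetic, hypothetical symptom data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical binary symptom indicators for 500 patients.
symptoms = ["fever", "cough", "fatigue", "rash", "nausea"]
X = rng.integers(0, 2, size=(500, len(symptoms)))

# Synthetic diagnosis label: driven mostly by fever and cough, plus noise.
y = ((X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 500)) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)

# The "explanation" the model can give is nothing more than these numbers:
for name, weight in zip(symptoms, model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")

# A doctor says "fever and cough, no rash, so diagnosis x";
# the model can only say "the weighted sum of these factors crossed a threshold".
```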

People need explanations. They want to understand how it works, at least conceptually, especially when it comes to topics they deem important. If they're going to trust a machine to make a medical diagnosis, or their car to drive itself, or a program to invest their money, they need confidence that the correct decision will be made. The burden of proof is far higher for a machine than it is for another person. More than 3,000 people die in car crashes worldwide every day and it's deemed normal, but if one self-driving car hits a pedestrian, regardless of who's at fault, there is an outcry that the machines can't be trusted. Skepticism is healthy, but if people don't trust the technology, companies have a hard time investing in it.

Legality

If a doctor uses a program to help make a diagnosis and the diagnosis is wrong, who's at fault? The doctor for trusting the diagnosis? The hospital for buying the software? The programmer of the software for making a bad diagnostic model? The providers of the diagnostic data that led to a faulty diagnostic model?

What happens if you apply ML to insurance applications and the algorithm routinely declines policies for applicants of a specific sex, minority group, or age group? Is the company culpable for violating equality laws, or can it blame the program?
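As a rough illustration of how that question would surface in practice, here is a minimal sketch (with hypothetical data and column names) of the kind of audit an insurer might run: compare the model's decline rate across groups and treat a large gap as a red flag worth investigating.

```python
# A minimal sketch of a decline-rate audit across groups.
# Assumes a pandas DataFrame of past decisions with hypothetical columns
# "age_group" and "approved" (1 = policy approved, 0 = declined).
import pandas as pd

decisions = pd.DataFrame({
    "age_group": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved":  [1,       1,       1,       0,       0,     0],
})

# Decline rate per group; a large gap between groups is at least a red flag.
decline_rate = 1 - decisions.groupby("age_group")["approved"].mean()
print(decline_rate)
```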

"Who's at fault" in the event something goes wrong could have huge financial implications for ML service providers. Although the pros are starting to outweigh the cons, corporations hesitate to invest in the areas where ML could provide the most value.

Technical Challenges

Training Data

The reality is that for many problems you would want to solve with machine learning, the data sets needed to sufficiently train the algorithm simply don't exist. Either they aren't big enough, they are missing key factors that you expect to be relevant, they are missing key factors that you don't expect to be relevant (think of as-yet-undiscovered correlations, like banana consumption leading to cancer), the data has quality issues such as standardization, or the data is restricted (think of data privacy rules when trying to get diagnosis data). This creates a need to develop the training data set locally (at one specific hospital) over time, which can take years to produce something statistically significant.
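To ground that list, here is a minimal sketch (hypothetical file and column names) of the first checks you would run against such a local data set: is it big enough, are the factors you expect to matter actually present, and are values recorded in a standardized way?

```python
# A minimal sketch of basic training-data checks.
# Assumes pandas and a hypothetical local CSV export of diagnosis records.
import pandas as pd

df = pd.read_csv("hospital_diagnoses.csv")  # hypothetical local data set

# 1. Is it big enough to train on at all?
print(f"rows: {len(df)}")

# 2. Are the factors we expect to be relevant even present?
expected = {"age", "blood_pressure", "diagnosis"}
print("missing expected columns:", expected - set(df.columns))

# 3. Standardization issues: the same quantity recorded in different units.
if "weight_unit" in df.columns:
    print("units in use:", df["weight_unit"].unique())

# 4. Plain data quality: how much of each column is empty?
print(df.isna().mean().sort_values(ascending=False).head())
```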

Lack of Creativity

Sadly, machine learning cannot solve every problem. It suffers from a lack of creativity. If you want to invent something new, create new art, record new music, etc., machine learning cannot help. It relies on the existence of historical data (read: a lot of historical data), so expressive tasks aren't really a good fit.

As my dad always said, don't use a wrench to bang in a nail.

Max Smith
Product Marketing Manager
