Keeping People in the Loop at All Stages of Machine Learning
Written By: Darren Redfern, CTO, skritswap
Machine Learning (a sub-branch of Artificial Intelligence) is making our lives easier, more productive, and more secure. But many people misunderstand:
- how these types of technologies work,
- what computer algorithms can do, and
- what they should or shouldn’t do.
In this post, I’ll focus on one key factor: the interaction, or cooperation, between human and machine that goes into creating machine intelligences.
Many people think of Machine Learning (ML) as computers ingesting lots of raw data and then magically becoming smart. Some types of ML do use techniques that aren't supervised by humans. But most ML technologies are built with humans providing the initial training data: inputs paired with their correct outputs. These models can then take in new data and determine the correct outputs.
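To make the idea concrete, here is a minimal toy sketch of that pattern (not skritswap's system, and the data and classifier choice are purely illustrative): humans supply labeled examples, and the model uses them to label new inputs.

```python
# Toy supervised learning: "training" stores human-labeled examples,
# and prediction labels a new input with the output of its nearest
# known input (a 1-nearest-neighbour rule, for illustration only).

def train(examples):
    """Training here is simply keeping the human-labeled examples."""
    return list(examples)

def predict(model, x):
    """Label a new input using its closest known input."""
    nearest = min(model, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Human-provided training data: message length -> "short"/"long"
labeled = [(3, "short"), (5, "short"), (40, "long"), (55, "long")]
model = train(labeled)
print(predict(model, 4))   # a new input near the "short" examples
print(predict(model, 60))  # a new input near the "long" examples
```

Real systems replace the nearest-neighbour rule with far more capable models, but the division of labor is the same: people provide the correct answers up front.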
But that’s not the whole story. The best systems have continuing interactions between humans and computers. These systems are called human-in-the-loop AIs. As an ML model or algorithm improves at predicting the correct outputs, there is less human supervision. The algorithms can mostly run on their own with some tweaking as data accumulates. But the value of active human-in-the-loop participation continues. These systems improve specifically because of the ongoing merger of the two types of intelligence.
One way to look at this is through the size of the data set. If little training data is readily available, people are essential for both creating or selecting inputs and generating the correctly matching outputs. Without people generating that data, there’d be no Machine Learning.
Even after the data set for a model has grown exponentially, input from people is still invaluable in:
- correcting machine errors,
- handling edge cases (exceptions to a rule, or situations that need special handling), and
- keeping the algorithms reliable.
Another contribution of human intelligence is discernment: the human faculties of perception, insight, intuition, and judgement. Discernment is the ability to look at something and recognize that it falls outside of what’s been seen before. People can see when a system needs to be altered to handle new types of situations that might not have been anticipated. An algorithm itself might flag where new situations could occur, but human intelligence is best suited to deciding how to handle these anomalies.
When you think of human-in-the-loop systems, remember that some interactions are explicit, and others are implicit:
- Explicit interactions include things like building or curating initial training datasets, reviewing machine outputs, making corrections, and running complex tests on performance.
- Implicit interactions happen when the system learns from the choices users make in their normal interactions with it. The user is unaware that they are providing feedback to improve the system. This kind of feedback comes from the design of a product’s interface or user experience.
For example, a well-designed search engine would track which result links were followed most commonly by users for identical or similar search requests and adjust its future result lists accordingly.
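A bare-bones version of that click-tracking idea might look like this (the queries, URLs, and counts are invented for illustration):

```python
# Implicit-feedback sketch: count which results users click for a
# query, then rank frequently chosen links higher on future searches.
from collections import Counter

clicks = Counter()

def record_click(query, url):
    """Log that a user followed this result for this query."""
    clicks[(query, url)] += 1

def rank(query, candidates):
    """Order candidate results by how often users picked them."""
    return sorted(candidates, key=lambda url: -clicks[(query, url)])

# Three users pick the forecast link; one picks the news link.
for _ in range(3):
    record_click("weather", "forecast.example")
record_click("weather", "news.example")

print(rank("weather", ["news.example", "forecast.example"]))
```

None of these users set out to train the system; their ordinary behavior did the work, which is the essence of implicit human-in-the-loop feedback.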
The best ML systems take advantage of both types of interactions. That is what we are doing at skritswap. We think the best approach is to leverage the two intelligences:
- Use computers for what they excel at: speed and accuracy; and
- Involve people for their unique aptitudes and ability to make decisions with confidence when data is scarce or inconclusive.
And that is how skritswap is using AI to maximize people power.