In a curious 2015 study, a group of 16 pigeons was given a touch screen and trained to examine a series of breast tissue images. Their task was simple: decide whether the patterns in each image suggested traces of cancer.
After a short training period, the pigeons achieved surprising results: working individually, they correctly classified 85% of the samples. The most surprising part, however, came later.
If the individual results were already surprising, the grouped responses were even more so: when the pigeons' votes on each image were pooled into a single collective evaluation, accuracy reached 99%.
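This jump from 85% to 99% is what you would expect from pooling independent votes. A minimal sketch, assuming the idealized case of 16 equally accurate and fully independent voters (real pigeons are surely neither), uses the binomial distribution to compute the chance that a strict majority is correct:

```python
from math import comb

def majority_vote_accuracy(n_voters, p_correct):
    """Probability that a strict majority of n independent voters,
    each correct with probability p_correct, picks the right answer."""
    majority = n_voters // 2 + 1
    return sum(
        comb(n_voters, k) * p_correct**k * (1 - p_correct)**(n_voters - k)
        for k in range(majority, n_voters + 1)
    )

# One 85%-accurate "pigeon" versus sixteen voting together
print(majority_vote_accuracy(1, 0.85))   # 0.85
print(majority_vote_accuracy(16, 0.85))  # well above 0.99
```

The same principle underlies ensemble methods in machine learning: many weak classifiers, combined by voting, can outperform any single one, as long as their errors are not too correlated.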
What does all this mean? Hannah Fry sheds some light on these unsettling results in her book Hello World: Being Human in the Age of Algorithms:
Let me hasten to say that, for now, pathologists' jobs are not in danger. I don't think the scientists who designed the study were even suggesting that doctors could be replaced by common pigeons. But the experiment did demonstrate something important: detecting hidden patterns in groups of cells is not an exclusively human ability. So if a pigeon can do it, why not an algorithm?
And algorithms are much better than doctors at this. In another 2015 study, for example, 72 breast tissue biopsies were shown to 115 pathologists, whose diagnoses agreed only 48% of the time. As Fry points out, "seeing your diagnosis reduced to more or less a 50% chance is almost equivalent to flipping a coin: if it comes up heads, you could end up getting an unnecessary mastectomy."