There’s a Glaring Mistake in the Way AI Looks at the World

November 3, 2017

(Quartz) – Researchers have found that the patterns AI looks for in images can be reverse-engineered and exploited using what they call an “adversarial example.” By changing an image of a school bus by just 3%, one Google team was able to fool AI into seeing an ostrich. The implication of this attack is that any automated computer-vision system, whether facial recognition, self-driving cars, or even airport security, can be tricked into “seeing” something that isn’t actually there.
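The idea behind such an attack can be sketched in a few lines. The toy below is an assumption for illustration only: it stands in a linear classifier for the deep network the article describes, and the “bus”/“ostrich” labels are illustrative. The attack nudges each input value slightly against the gradient of the model’s score (the fast-gradient-sign approach), so a small perturbation flips the predicted label:

```python
import numpy as np

# Toy sketch of an adversarial-example attack (assumed setup: a linear
# classifier in place of a real image model; labels are illustrative).
rng = np.random.default_rng(0)
w = rng.normal(size=20)        # classifier weights
x = rng.normal(size=20)        # clean input

def label(v):
    # positive score -> "bus", negative -> "ostrich"
    return "bus" if w @ v > 0 else "ostrich"

if w @ x < 0:                  # ensure the clean input reads as "bus"
    x = -x

# For a linear model the gradient of the score w.r.t. x is w itself,
# so push every coordinate a small step against sign(w) -- just far
# enough that the score crosses zero and the label flips.
eps = 1.1 * (w @ x) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(label(x), "->", label(x_adv))   # prints: bus -> ostrich
```

Each coordinate moves by only `eps`, a fraction of the input’s typical magnitude, yet the classification changes completely; against a deep network the same principle works with pixel changes too small for a human to notice.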
