Carnegie Mellon University researchers have developed a method that automatically identifies memes and applies pre-written templates to add descriptive text, making the memes intelligible via existing assistive technologies.
The six researchers built a system that classifies and translates memes with up to 92 per cent accuracy. Visually impaired people generally rely on screen readers and built-in accessibility features to navigate the internet.
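The article does not describe the classifier itself. As a rough illustration of how identifying a known meme image can work, the sketch below matches an incoming image against a small library of templates using perceptual hashing; the imagehash library is one common tool for this, and the template names and file paths are hypothetical.

```python
# A minimal sketch of meme-template identification via perceptual
# hashing. This is an illustrative approach, not the researchers'
# actual pipeline; the template library below is hypothetical.
from PIL import Image
import imagehash

# Hypothetical library of known meme templates.
TEMPLATE_IMAGES = {
    "success_kid": "templates/success_kid.jpg",
    "grumpy_cat": "templates/grumpy_cat.jpg",
}

# Precompute a perceptual hash for each known template image.
template_hashes = {
    name: imagehash.phash(Image.open(path))
    for name, path in TEMPLATE_IMAGES.items()
}

def identify_meme(image_path, max_distance=10):
    """Return the best-matching template name, or None if nothing is close."""
    query = imagehash.phash(Image.open(image_path))
    best_name, best_dist = None, max_distance + 1
    for name, h in template_hashes.items():
        dist = query - h  # Hamming distance between the 64-bit hashes
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None
```

Perceptual hashes tolerate recompression and small edits, which matters for memes because the same base image circulates in many slightly altered copies.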
According to the study, social media platforms such as Facebook, Instagram, and Twitter allow users to add alternative text (alt text) to their images, but most users are unaware of the feature or simply do not use it. As a result, just 0.1 per cent of images are accessible to visually impaired people. Cole Gleason, a Ph.D. student at CMU's Human-Computer Interaction Institute, said that basic computer vision techniques make it possible to describe the image underlying each meme.
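To make the 0.1 per cent figure concrete, one way such coverage can be measured is by scanning a page's images for non-empty alt attributes. The sketch below uses requests and BeautifulSoup; the URL is a placeholder, and this is an illustration of the metric rather than the study's methodology.

```python
# A rough sketch of auditing alt-text coverage on a web page.
# The URL is a placeholder; this is not the study's methodology.
import requests
from bs4 import BeautifulSoup

def alt_text_coverage(url):
    """Return the fraction of <img> tags carrying non-empty alt text."""
    html = requests.get(url, timeout=10).text
    images = BeautifulSoup(html, "html.parser").find_all("img")
    if not images:
        return 0.0
    with_alt = sum(1 for img in images if img.get("alt", "").strip())
    return with_alt / len(images)

print(f"{alt_text_coverage('https://example.com'):.1%} of images have alt text")
```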
Optical character recognition (OCR) techniques are used to decipher the overlaid text, which can change with each instance of a meme. For each meme, only one template describing the underlying image is needed; the OCR'd overlaid text can then be added for each occurrence. Gleason said the approach's success depends on the meme, noting that some visuals are more nuanced. The researchers were motivated to explore technology and AI for making memes accessible in part because multiple Instagram accounts are already dedicated to creating memes for visually impaired people.
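A minimal sketch of this template-plus-OCR idea follows: a single hand-written description per meme image, combined with overlaid text extracted per instance. pytesseract is one widely used OCR binding, and the template description here is a hypothetical example, not taken from the researchers' system.

```python
# Sketch of combining a pre-written template description with
# OCR'd overlaid text. pytesseract is one common OCR binding;
# the template text below is a hypothetical example.
from PIL import Image
import pytesseract

# One hand-written description per known meme template (hypothetical).
TEMPLATE_ALT_TEXT = {
    "success_kid": "A toddler on a beach clenches his fist triumphantly.",
}

def meme_alt_text(template_name, image_path):
    """Build alt text: the fixed template description plus this
    instance's overlaid text, as read by OCR."""
    overlaid = pytesseract.image_to_string(Image.open(image_path)).strip()
    base = TEMPLATE_ALT_TEXT[template_name]
    return f"{base} Overlaid text reads: {overlaid}" if overlaid else base
```

Writing the image description once per template is what makes the approach scale: only the overlaid caption, which OCR can read automatically, changes between occurrences of the same meme.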
Source: Carnegie Mellon University School of Computer Science