At the beginning of this month, Twitter finally rolled out its updated full-size image display, ensuring users get the full context in image previews instead of the earlier automatically cropped version.
The update has been widely welcomed by users, even though it shifts tweet timelines a little.
The interesting thing about this update was the logic provided by Twitter.
Twitter introduced this update as part of its efforts to address algorithmic bias, not simply to provide an improved user experience. In October last year, it began an investigation after users shared examples of how Twitter's image cropping algorithm favored white people over Black people in attached images.
Twitter image experiment
Twitter launched its internal investigation in response to these example tweets, which subsequently led to a broader analysis of its visual algorithms and how well they actually served this task.
Surprisingly, it did indeed find problems with its algorithms. Twitter revealed that its earlier image cropping process used a 'saliency algorithm' to determine how each image was cropped.
The saliency algorithm works by predicting where a person is likely to look first within a picture, so that the system can crop the image to an easily viewable size around that point. Saliency models are trained on human eye-tracking data as a way of prioritizing what's likely to be the most important element to most people. The algorithm predicts a saliency score for every part of the image and selects the point with the highest score as the center of the crop.
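The score-then-crop step described above can be sketched in a few lines. This is a minimal illustration of the general idea, assuming the saliency model has already produced a per-pixel score map; it is not Twitter's actual implementation, and the function name and parameters are invented for illustration.

```python
import numpy as np

def crop_around_max_saliency(image, saliency, crop_h, crop_w):
    """Crop `image` to (crop_h, crop_w), centered on the most salient point.

    `image` is an H x W (x C) array; `saliency` is an H x W score map
    produced by some saliency model. Hypothetical sketch, not Twitter's code.
    """
    h, w = saliency.shape
    # Location of the highest saliency score -- the predicted focal point.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Center the crop window on that point, clamped to the image bounds
    # so the window never falls outside the image.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

The key design point is the clamping: the crop stays centered on the highest-scoring pixel whenever possible, but slides inward when that pixel is near an edge, so the output is always the requested size.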
After Twitter received many complaints from users apprehensive about potential bias in this process, it conducted testing, which showed that its saliency algorithm did have a stronger preference for fair-skinned people over Black people in images, and that it would also sometimes choose "a woman's breast line or legs as a salient feature" when determining the focal point of a picture.
Both of these behaviors were clearly problematic, and they ultimately led Twitter to update its image cropping process, which has now moved to full image display in the mobile app. Even so, Twitter's investigators found the issues to be relatively marginal.
They concluded that in comparisons of Black and white individuals, there was a 4% difference from demographic parity in favor of white individuals. Moreover, for every 100 images of women, about three were cropped at a body part other than the head. And when images weren't cropped at the head, the crop typically centered on non-physical aspects of the image, such as a number on a sweatshirt or a slogan. Yet Twitter decided to take action because it shares users' concern that any level of bias on these fronts is not good.
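The "difference from demographic parity" figure cited above is simply the gap between the rates at which each group receives the favorable outcome. A minimal sketch of that calculation, with hypothetical numbers chosen only to reproduce a 4-point gap (these are not Twitter's published counts):

```python
def demographic_parity_difference(outcomes_a, outcomes_b):
    """Gap between favorable-outcome rates for two groups.

    Each argument is a list of 0/1 flags, e.g. 1 when the crop kept a
    person from that group in frame. Illustrative sketch only.
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return rate_a - rate_b

# Hypothetical example: group A favored in 52 of 100 crops, group B in
# 48 of 100 -- a 4-percentage-point departure from parity.
gap = demographic_parity_difference([1] * 52 + [0] * 48,
                                    [1] * 48 + [0] * 52)
print(f"{gap:+.0%}")  # +4%
```

Perfect parity would give a gap of zero; any nonzero value means one group is systematically favored, which is exactly the concern Twitter's investigators quantified.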
This also led Twitter to an interesting discovery:
“We have realized that not everything on Twitter is a good example for an algorithm, and in this particular case, the decision about how to crop an image is best left to people.” Algorithms are excellent when it comes to optimizing tasks, and showing people more of what they are actually looking for, but they can also reinforce existing bias and may lead to problematic behaviors.
It was a bold step for Twitter to acknowledge the inherent problems with its algorithm and to work earnestly toward a solution. It recognized that in some cases, an algorithm was not what users needed in the first place. Algorithms are meant to serve the user experience, not the other way round.
Let's hope other leading channels such as Facebook, Instagram, and LinkedIn, as well as newer channels like Connected India that are still seeking acceptance among a larger audience, learn from Twitter and give this the same consideration. Stay updated with technologygrabber.us