The company’s experts reached this conclusion through internal research, but given the complexity of the topic, they do not yet know why any of it happens.
As more and more questions are raised about the use of AI, the large IT companies that deploy these technologies are coming under growing pressure to give a clearer picture of what exactly their algorithms do and why.
The service providers themselves are trying to answer the related questions and problems. One such attempt is Twitter’s dedicated unit examining the ethical issues, transparency, and accountability of AI: a research team called META (ML Ethics, Transparency, and Accountability), which recently published a very interesting study (PDF).
Pulls to the right
The algorithms responsible for distributing Twitter content amplify the posts of right-wing politicians and right-wing media more often than messages from the other side of the political spectrum, according to the microblogging provider’s own research.
Experts tracked millions of tweets from politicians in seven countries (the USA, Japan, the UK, Canada, Germany, Spain, and France). Between April and mid-August last year, these users posted thousands of messages while the microblogging provider’s algorithms decided how often to surface them in users’ interest-based news feeds.
It turned out that in all the countries studied, the messages of right-wing politicians were “pushed” more by the AI. The gap between the two major political camps was widest in Canada and Britain. In the UK, for example, Labour tweets received an average boost of 112 percent, while Conservative messages gained 176 percent on users’ virtual walls thanks to the AI’s decisions. In Canada, the left-liberals got only a 43 percent extra impression boost, while Conservative Party members measured the algorithm-provided turbo at 167 percent.
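The percentages above can be read as relative reach. Here is a minimal sketch, assuming (the article does not specify the methodology) that the amplification metric is simply the percentage increase in impressions that algorithmic timelines deliver compared with chronological ones; the function name and the impression counts are illustrative, chosen only to reproduce the UK figures quoted above:

```python
def amplification_pct(algorithmic_impressions, chronological_impressions):
    """Percentage increase in reach attributable to the algorithm.

    0 means equal reach; 100 means twice the reach in the
    algorithmic timeline compared with the chronological one.
    """
    return (algorithmic_impressions / chronological_impressions - 1) * 100

# Hypothetical impression counts that match the UK figures in the
# article (Labour +112 percent, Conservative +176 percent).
print(round(amplification_pct(2120, 1000)))  # 112
print(round(amplification_pct(2760, 1000)))  # 176
```

Under this reading, a 176 percent amplification means a tweet was seen almost three times as often as it would have been in a purely chronological feed.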
Obviously, all of this matters to politicians, since users can only read, comment on, and share the messages that appear in their personal feeds because the AI surfaced them there.
They don’t understand
The discrepancies shown by the metrics are an interesting topic in themselves, since in the United States, for example, right-wing politicians regularly accuse social platforms of censorship and deliberate suppression of conservative views.
But perhaps even more interesting than resolving this contradiction is why the AI does all this. The answer, however, is not at all simple. The experts involved in the research said that personalizing feeds is a very complex process: on the one hand, not one but several different algorithms run together in the background; on the other hand, their behavior has to be reconciled with the extremely rich data footprints of many millions of individual users, so reviewing every variable in the formula is a nearly impossible task.
It seems certain that the difference in amplification is not really affected by whether a particular party is in government or in opposition. It is also clear that messages from different politicians within the same party receive different levels of support from the AI. And it is notable that the same boost pattern appears when examining content from politically aligned media outlets: Fox News, for example, has benefited more from the algorithm than BuzzFeed.
Since the Twitter team has not been able to uncover the deeper causes on its own, it is now working on sharing the dataset underlying the research with experts around the world. First, however, it must find a way to keep sensitive user data secure without significantly degrading the quality of the sea of information, i.e. while keeping it analyzable.