Stanford AI researchers have created a tool to put algorithmic auditing in the hands of affected communities

Even with the most recent advances in technology, AI still cannot be guaranteed to produce unbiased or ethically sound outputs. When confronted with biased search results, social media feeds, or automated hiring and credit decisions, an ordinary person has little recourse. The best most people can do is voice their outrage, boycott the platform, or report the incident in the hope that whoever maintains the algorithm will make corrections, and such complaints often go nowhere. Journalists and researchers, by contrast, have technical means at their disposal: they can probe an algorithmic system to identify the inputs that lead to biased results. Such algorithmic audits can help affected communities hold the operators of harmful algorithms accountable.

To enable a deeper examination of the impacts of algorithms, researchers at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) collaborated with the University of Pennsylvania on a study that puts the tools of algorithmic auditing in the hands of everyday users, especially members of affected communities. As a proof of concept, the team developed IndieLabel, a web-based tool that lets end users audit the Perspective API. The primary goal was to determine whether ordinary people could surface systematic statements about what a system was doing incorrectly and identify issues of bias that had not previously been reported.

The research team chose to test their approach on the Perspective API in a content moderation setting. The Perspective API is a widely used content moderation model that scores how toxic a piece of text is. Reputable publishers, including The New York Times and El País, use it to flag content for manual review, label it as harmful, or reject it automatically. Additionally, because the Perspective API has already undergone formal audits by technical experts, it provides a baseline for comparing how end-user auditors approach the auditing process differently from specialists.
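For readers who want to see what the system under audit looks like in practice, here is a minimal sketch of a toxicity query against the Perspective API. The endpoint and request shape follow Google's public documentation, but the API key is a placeholder and real use requires registering for access; this is an illustration, not the study's code.

```python
import requests

# Placeholder: a real key must be obtained from Google Cloud.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return the Perspective API's TOXICITY probability for `text`."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    body = response.json()
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # Scores range from 0 (benign) to 1 (very likely toxic); moderation
    # pipelines typically compare them against a chosen threshold.
    print(toxicity_score("You are a wonderful person."))
```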

To build a comprehensive picture, IndieLabel models the end-user auditor’s perception of content toxicity and lets the user drill down to observe where the Perspective API diverges from them. This is where IndieLabel makes a significant break from convention: typically the model serves as the benchmark and users’ opinions are measured against it, but here the user’s opinion is the benchmark against which the model is compared. The end-user auditor first rates about 20 examples of content on a 5-point scale from “not at all toxic” to “very toxic.” Although 20 may seem like a modest amount, the team demonstrated that it is enough to train a model that predicts how the auditor would label a considerably larger dataset. After this training phase, the auditor can proceed with the audit or rate additional samples to refine their model.
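As a rough illustration of how a handful of labels can be extrapolated to a much larger corpus, the sketch below trains a simple text regressor on the auditor’s seed ratings. The paper’s actual modeling pipeline is not reproduced here; the TF-IDF features, ridge regression, and all variable names are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Hypothetical seed set: ~20 comments the auditor rated on a 5-point
# scale (0 = "not at all toxic" ... 4 = "very toxic").
seed_comments = ["first example comment", "second example comment"]  # ~20 in practice
seed_labels = [0, 3]

# A much larger unlabeled corpus the trained model will rate on the
# auditor's behalf (thousands of comments in practice).
corpus = ["another comment", "and another", "and one more"]

# Fit a simple model that mimics the auditor's judgments...
vectorizer = TfidfVectorizer()
model = Ridge(alpha=1.0).fit(vectorizer.fit_transform(seed_comments), seed_labels)

# ...then extrapolate those judgments to the full corpus. These
# predicted ratings become the benchmark the Perspective API is
# compared against, inverting the usual model-as-ground-truth setup.
predicted_ratings = model.predict(vectorizer.transform(corpus))
```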

During the audit, users can select a topic from a drop-down menu or define their own topics. IndieLabel then generates a histogram showing where the Perspective API’s toxicity predictions for that topic differ from the user’s perspective. The auditor can examine individual examples and write notes explaining why, from their point of view, each one is or is not toxic, producing a report a developer can use to better understand the system’s behavior.
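Here is a minimal sketch of the kind of disagreement histogram described above, assuming we already have the auditor model’s predicted ratings and the Perspective API’s scores for the same comments, both rescaled to [0, 1]; the random arrays are stand-ins for real data.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
auditor_scores = rng.random(500)      # stand-in for the auditor model's ratings
perspective_scores = rng.random(500)  # stand-in for Perspective API scores

# Positive values mean Perspective flags content the auditor considers
# benign (over-flagging); negative values mean under-flagging.
disagreement = perspective_scores - auditor_scores

plt.hist(disagreement, bins=30)
plt.xlabel("Perspective score minus auditor score")
plt.ylabel("Number of comments")
plt.title("Where the system diverges from the end user")
plt.show()
```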

To evaluate IndieLabel, the study team recruited 17 non-technical auditors. Participants both reproduced issues that formal audits had previously identified and raised issues that had not been reported before, such as the under-flagging of covert acts of hate that reinforce stigma and the over-flagging of slurs that marginalized groups have reclaimed. Participants’ opinions also sometimes diverged on the same audit topic, for example on how strictly to limit the use of derogatory terms for people with intellectual disabilities.

The team also emphasizes how important it is for algorithm developers to build end-user audits into a system early in the development process, before deployment. Developers will need to pay far more attention to the communities their systems serve and deliberate early on about how those systems should behave in contentious problem areas. Doing so establishes a direct feedback loop in which complaints reach the developer immediately, so the system can be corrected before harm is done. The IndieLabel approach could, for example, be adapted to examine a social media company’s feed-ranking algorithm or a large employer’s candidate-screening model.

The team also aims to host end-user audits on independent third-party platforms in the near future, which would first require acquiring a suitable dataset. That would make end-user audits accessible to any community. Even though the process would be slow, it may be necessary when an algorithm’s developer refuses to address a particular problem, and relying on public pressure to force changes is still far better than the old strategy of filing an anecdotal complaint that gets lost in the ether.


Check out the Paper and Reference. All credit for this research goes to the researchers on this project. Also, don’t forget to join our Reddit page and Discord channel, where we share the latest AI research news, cool AI projects, and more.


Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Goa. She is passionate about machine learning, natural language processing, and web development. She enjoys learning more about the technical field by taking part in various challenges.

