Using Natural Language Patches to Correct Systematic Issues in Neural Models

Researchers from Stanford and Microsoft have proposed an Artificial Intelligence (AI) approach that uses declarative statements as corrective feedback for neural models with bugs.

The methods used today to fix systematic problems in NLP models can be fragile, time-consuming, or prone to learning shortcuts. Humans, on the other hand, frequently correct each other using natural language. Recent research has therefore focused on natural language patches: declarative statements that let developers provide corrective feedback, either by overriding the model's behavior or by supplying information the model is missing. A sketch of the kind of patch a developer might write appears below.
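To make the idea concrete, here is a minimal illustration of patches for a restaurant-review sentiment model. The exact wording and the sentiment-analysis setting are illustrative assumptions; the point is that each patch states a condition and the correction to apply when that condition holds.

```python
# Hypothetical natural language patches a developer might write for a
# restaurant-review sentiment model. Each patch pairs a condition
# ("if the food is described as 'bomb'") with a correction
# ("then the food is good") that the patched model should follow.
patches = [
    "If the food is described as 'bomb', then the food is good.",
    "If the reviewer says they want their money back, then the review is negative.",
]
```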

There is a growing body of research that uses language instead of labeled examples to provide models with instructions, supervision, and even inductive biases. Examples include building neural representations from language descriptions (Andreas, 2018; Mu, 2020; Murty, 2020) and language-based zero-shot learning (Brown, 2020; Hanjie, 2022; Chen, 2021). However, language remains underexplored as a way for a user to interact with a model in order to improve it.

The neural language patching model has two heads: a gating head that determines whether a patch applies to a given input, and an interpreter head that predicts the output based on the information contained in the patch. The model is trained in two stages: the first uses a labeled task dataset, and the second is patch-specific fine-tuning, in which a set of patch templates is used to synthetically generate labeled examples paired with patches. A rough sketch of how the two heads can be combined is shown below.
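The following PyTorch sketch illustrates one plausible way to wire up a gating head and an interpreter head, assuming a standard encoder-classifier setup. The class name, head layout, and soft gating mixture are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PatchedClassifier(nn.Module):
    """Illustrative two-head patching model (not the authors' code).

    An encoder embeds the input alone and the input concatenated with a
    patch. A gating head scores whether the patch applies; an interpreter
    head predicts labels conditioned on the patch. The final distribution
    mixes the patched and unpatched predictions using the gate.
    """

    def __init__(self, encoder, hidden_dim, num_labels):
        super().__init__()
        self.encoder = encoder                                 # e.g. any text encoder returning (batch, hidden_dim)
        self.gate_head = nn.Linear(hidden_dim, 1)              # does the patch apply to this input?
        self.interp_head = nn.Linear(hidden_dim, num_labels)   # label given (input, patch)
        self.base_head = nn.Linear(hidden_dim, num_labels)     # label given the input alone

    def forward(self, x_ids, x_patch_ids):
        h_x = self.encoder(x_ids)          # encoding of the input alone
        h_xp = self.encoder(x_patch_ids)   # encoding of the input plus patch text
        gate = torch.sigmoid(self.gate_head(h_xp))             # patch relevance in [0, 1]
        p_patched = torch.softmax(self.interp_head(h_xp), dim=-1)
        p_base = torch.softmax(self.base_head(h_x), dim=-1)
        # Soft mixture: fall back to the unpatched prediction when the gate is low.
        return gate * p_patched + (1 - gate) * p_base
```

A toy usage, with hypothetical vocabulary and embedding sizes, might look like `model = PatchedClassifier(nn.Sequential(nn.EmbeddingBag(30522, 128), nn.ReLU()), hidden_dim=128, num_labels=2)`. The advantage of the gated design is that inputs the patch does not cover are routed back to the original model's prediction, so a patch only changes behavior where its condition holds.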
