AI21 Labs Introduces In-Context RALM: Adding Ready-Made External Knowledge Sources to Existing Language Models
Recent advances in language modeling research have led to the expansion of machine-generated text into previously untapped areas. A major issue remains: LM-generated text is often rife with inconsistencies or factual errors. The problem can occur in any LM-generation scenario, but it becomes more pronounced when the LM is required to generate text in unusual domains or to incorporate up-to-date information.
This problem can be addressed by Retrieval-Augmented Language Modeling (RALM) methods, which show the LM relevant documents from a grounded corpus during generation. Current RALM strategies focus on changing the LM architecture to incorporate external data, which can be very complex. AI21 Labs (an organization that develops artificial-intelligence systems) developed a strategy to solve this problem, called In-Context Retrieval-Augmented Language Modeling, which allows an existing language model to be supplemented with external information. Retrieved documents are simply prepended to the language model's input, leaving the underlying LM architecture unchanged. The team published its findings in a paper entitled "In-Context Retrieval-Augmented Language Models."
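The core mechanism can be illustrated with a minimal sketch: retrieve a relevant document and prepend it to the prompt before it reaches the model. The word-overlap retriever and the `ralm_prompt` helper below are illustrative stand-ins, not AI21's actual implementation.

```python
# A minimal sketch of the In-Context RALM idea: retrieved documents are
# prepended to the prompt, so the language model itself is left unchanged.
# The retriever here is a toy word-overlap scorer standing in for a real
# retriever (e.g., BM25 or a dense retriever); the names are hypothetical.

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def ralm_prompt(query: str, corpus: list[str]) -> str:
    """Build the augmented input: retrieved evidence first, then the query."""
    docs = retrieve(query, corpus)
    context = "\n".join(docs)
    return f"{context}\n\n{query}"

corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Photosynthesis converts light into chemical energy.",
]
prompt = ralm_prompt("How tall is the Eiffel Tower?", corpus)
print(prompt.splitlines()[0])  # the retrieved document comes first
```

The augmented prompt would then be passed unchanged to any off-the-shelf language model, which is exactly what lets the method work without architectural modifications.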
In the same publication, AI21 Labs announced Wordtune Spices, an add-on to its Wordtune text editor. Wordtune Spices helps authors quickly generate text and create content, accelerating the writing of academic papers and creative documents. Spices is built on the In-Context RALM method. Users have 12 prompt options, including explanations, definitions, and even jokes. The user can choose the prompt that best fits their situation and receive a series of sentences to help support their argument and provide more detail.