Stars
Loki: an open-source solution for automating factuality verification
Github repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models"
Reading list on hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
Transformer-related optimization, including BERT and GPT
A collection of localized (Korean) AWS AI/ML workshop materials for hands-on labs.
A programming framework for agentic AI 🤖 PyPi: autogen-agentchat Discord: https://aka.ms/autogen-discord Office Hour: https://aka.ms/autogen-officehour
We introduce new zero-shot prompting magic words that improve the reasoning ability of language models: panel discussion!
Using Tree-of-Thought Prompting to boost ChatGPT's reasoning
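Tree-of-Thought prompting can be sketched as a simple prompt template; the wording below is an illustrative paraphrase of the "multiple experts" framing, not the repository's canonical prompt:

```python
def tree_of_thought_prompt(question: str, num_experts: int = 3) -> str:
    """Wrap a question in a Tree-of-Thought style prompt that asks the
    model to simulate several experts reasoning step by step."""
    return (
        f"Imagine {num_experts} different experts are answering this question.\n"
        "Each expert writes down one step of their thinking, then shares it\n"
        "with the group. They continue step by step; if any expert realizes\n"
        "they are wrong at any point, they leave the discussion.\n"
        f"The question is: {question}"
    )

prompt = tree_of_thought_prompt("Is 17077 a prime number?")
print(prompt)
```

The template is then sent as a single user message; the branching and pruning happen implicitly inside the model's generated discussion rather than in code.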
This is a workshop designed for Amazon Bedrock, a foundation model service.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
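PEFT's best-known method, LoRA, freezes a base weight matrix W and trains only a low-rank update on top of it. A minimal pure-Python sketch of the resulting forward pass (illustrative math only, not PEFT's actual API; `lora_forward` and the scaling argument `alpha` are hypothetical names):

```python
def matmul(a, b):
    # naive matrix multiply: a is m x k, b is k x n
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_forward(x, W, A, B, alpha=1.0):
    """Compute x @ (W + alpha * A @ B).

    W (d x k) stays frozen; only the low-rank factors A (d x r) and
    B (r x k) are trained, so trainable parameters drop from d*k
    to r*(d + k).
    """
    delta = matmul(A, B)  # rank-r update to the frozen weight
    W_eff = [[W[i][j] + alpha * delta[i][j] for j in range(len(W[0]))]
             for i in range(len(W))]
    return matmul(x, W_eff)

# x: 1x2 input, W: frozen 2x2 identity, rank-1 factors A (2x1), B (1x2)
out = lora_forward([[1, 2]], [[1, 0], [0, 1]], [[1], [1]], [[1, 1]])
print(out)  # [[4, 5]]
```

Because the update is additive, setting `alpha=0` (or zero-initializing one factor, as LoRA does at the start of training) recovers the frozen model's output exactly.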
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
[ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723
Must-read papers on prompt-based tuning for pre-trained language models.
A novel embedding training algorithm leveraging ANN search, achieving SOTA retrieval on the TREC DL 2019 and OpenQA benchmarks
Code and data of ACL 2021 paper "Few-NERD: A Few-shot Named Entity Recognition Dataset"
Models for neural summarization (extractive and abstractive) using transformers, plus a tool to convert abstractive summarization datasets to the extractive task.
Code for the EMNLP 2019 paper "Text Summarization with Pretrained Encoders"
Repository for the code associated with the paper: Unsupervised Extractive Summarization using Mutual Information
Unsupervised Extractive Summarization based on Position-Augmented Centrality
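The idea behind position-augmented centrality can be illustrated with a toy extractive summarizer: score each sentence by its similarity to the rest of the document, then boost earlier sentences with a position prior. The bag-of-words vectors and the `1 / (1 + decay * i)` weighting below are placeholder choices for illustration, not the paper's actual formulation:

```python
import math
from collections import Counter

def cosine(c1, c2):
    # cosine similarity between two bag-of-words Counters
    dot = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def summarize(sentences, k=1, decay=0.1):
    """Rank sentences by position-weighted centrality; return the top k
    in document order."""
    vecs = [Counter(s.lower().split()) for s in sentences]
    scores = []
    for i, v in enumerate(vecs):
        centrality = sum(cosine(v, u) for j, u in enumerate(vecs) if j != i)
        position_weight = 1.0 / (1.0 + decay * i)  # favor earlier sentences
        scores.append(centrality * position_weight)
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    return [sentences[i] for i in sorted(ranked[:k])]

doc = ["the cat sat on the mat", "the cat ate food", "dogs run fast"]
print(summarize(doc, k=1))  # ['the cat sat on the mat']
```

Being fully unsupervised, this kind of scorer needs no labeled summaries: centrality comes from the document itself, and the position prior encodes the lead bias common in news text.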
Financial Sentiment Analysis with BERT
Avalanche: an End-to-End Library for Continual Learning based on PyTorch.