Chatbot Reporter will maintain a running list of terms here. Last updated: May 28, 2023
Artificial intelligence
Artificial intelligence (AI) is a branch of computer science focused on developing systems that can reason, learn, and act autonomously, performing tasks that would otherwise require human intelligence. Using algorithms, models and other programs, AI enables computers to process data, language and other information in order to reason and make decisions. From playing games to delivering medical diagnoses, AI has demonstrated exceptional problem-solving abilities and shows enormous potential to replicate or augment human intelligence.
Large language model
Large language models, or LLMs, are AI models that generate humanlike text responses. LLMs are trained on extensive amounts of text, which they use to predict the word or sequence of words most likely to come next, given the context the user provides.
OpenAI’s ChatGPT and Google’s Bard are both built on LLMs. These models are called “large” because of the vast amount of text data on which they are trained, which can include books, articles, websites, chat conversations, social media posts, computer code and other sources. LLMs scour these sources to learn patterns and relationships among trillions of words, enabling them to generate content across a wide variety of topics and styles.
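To make next-word prediction concrete, here is a toy sketch in Python. Real LLMs use neural networks trained on trillions of words; this miniature version, built on an invented two-word corpus rule, only illustrates the core idea of predicting what comes next from what came before.

```python
from collections import Counter, defaultdict

# A made-up training corpus, split into words. An LLM's corpus would be
# billions of documents; the principle is the same.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than any other word
```

An LLM does the same job with far richer context: instead of looking only at the previous word, it weighs the entire passage before choosing what to generate next.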
Machine learning
A field of AI, machine learning focuses on creating algorithms and other programs that help computers learn without being explicitly instructed on every task. In machine learning, computers study data or examples to improve their performance on a specific task. It is used in a wide range of AI functions where data-driven decision making is needed, including image and speech recognition, natural language processing, recommendation systems, fraud detection, autonomous vehicles, and healthcare diagnostics.
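A small Python sketch shows what “learning from examples” means in practice. The program below is never told the rule y = 2x; it is only shown example pairs (the rule and the numbers here are invented for illustration) and nudges its guess to reduce its prediction error.

```python
# Example (x, y) pairs that secretly follow y = 2x. The program must
# discover the coefficient on its own.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]

weight = 0.0           # the model's initial guess for the coefficient
learning_rate = 0.01   # how far to adjust the guess on each error

for _ in range(1000):
    for x, y in examples:
        prediction = weight * x
        error = prediction - y
        # Nudge the guess in the direction that shrinks the error
        # (a simple gradient-descent update).
        weight -= learning_rate * error * x

print(round(weight, 2))  # the learned coefficient settles near 2.0
```

The same loop of guess, measure error, adjust underlies far larger systems, from spam filters to image recognition, just with vastly more data and many more adjustable weights.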
Natural language processing
A primary area of AI research and development, natural language processing (NLP) focuses on the interaction between computers and human language. NLP uses computer programs, algorithms and other techniques to enable computers to understand, interpret and generate text that is useful and meaningful.
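Here is a deliberately crude Python sketch of one classic NLP task, sentiment analysis. The word lists and the sample sentence are invented; modern NLP systems learn from data rather than hand-written lists, but the sketch shows a computer extracting meaning from raw text.

```python
# Hand-made word lists (an assumption for illustration; real systems
# learn these associations from large amounts of text).
positive = {"good", "great", "excellent", "helpful"}
negative = {"bad", "poor", "useless", "broken"}

def sentiment(text):
    """Label text positive, negative or neutral by counting cue words."""
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The support team was helpful and the product is great"))
# "helpful" and "great" outweigh zero negative words, so: positive
```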
Transformer
A transformer is a type of neural network architecture that assigns different weights to words according to their context, regardless of how far apart the words sit in the text. Proposed in 2017, transformers have become a cornerstone in the development of advanced AI models and have been especially influential in the progress of natural language processing. Transformers have demonstrated exceptional performance in tasks like machine translation, text summarization, sentiment analysis, and question answering.
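The weighting mechanism at the heart of a transformer, called attention, can be sketched in a few lines of Python. The word vectors below are made-up numbers, and real transformers use learned projections and many attention heads in parallel; the sketch only shows how similarity scores become weights that sum to 1, with no penalty for distance between words.

```python
import math

# Invented 2-number "meaning" vectors for three words. In a real model
# these are learned, high-dimensional, and computed per sentence.
vectors = {
    "river": [1.0, 0.0],
    "bank":  [0.8, 0.6],
    "money": [0.0, 1.0],
}

def attention_weights(query_word):
    """Score every word against the query word, then softmax the scores
    into weights that sum to 1 (scaled dot-product attention)."""
    query = vectors[query_word]
    scores = {
        word: sum(q * v for q, v in zip(query, vec)) / math.sqrt(len(query))
        for word, vec in vectors.items()
    }
    total = sum(math.exp(s) for s in scores.values())
    return {word: math.exp(s) / total for word, s in scores.items()}

weights = attention_weights("bank")
# With these toy vectors, "bank" leans toward "river" over "money",
# hinting at how context picks out one sense of an ambiguous word.
print({word: round(w, 2) for word, w in weights.items()})
```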