Artificial Intelligence Terms You Need to Know

As artificial intelligence becomes an ever more established discipline, it's important to understand its terminology. In this journal entry we showcase a gallery of artificial intelligence terms you need to know.

Artificial Intelligence Terms

#Agents: Agents are also known as bots or droids. These are autonomous software programs that respond to their environment and act on behalf of humans to accomplish a system's target function.

#Algorithms: A specific set of rules or instructions given to an AI, neural network, or other machine. These instructions determine how the program analyses data and help it learn on its own. An algorithm uses data to complete its specified task (target function).

#Artificial intelligence: A machine's ability to make decisions and solve difficult problems that humans (and animals) routinely solve.

#Artificial Neural Network (ANN): A learning model designed to mimic the neural configuration of the human brain, used to solve tasks that are too difficult for traditional computer systems.

#Autonomic Computing: Also known as AC, this refers to the self-managing characteristics of distributed computing resources, which adapt to unpredictable changes while hiding intrinsic complexity from operators and users.

#Bayesian Network: A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG).
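
As a rough sketch in Python (with invented probabilities), here is a three-node network, Rain → WetGrass ← Sprinkler, whose joint distribution factorises along the DAG:

```python
# A tiny Bayesian network: Rain -> WetGrass <- Sprinkler.
# Each node stores P(value | parents); the joint probability
# factorises along the DAG, so marginals follow by enumeration.
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.4, False: 0.6}
# P(wet | rain, sprinkler)
P_wet = {
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.85, (False, False): 0.05,
}

# P(WetGrass=True) = sum over rain, sprinkler of the joint probability.
p_wet_true = sum(
    P_rain[r] * P_sprinkler[s] * P_wet[(r, s)]
    for r, s in product([True, False], repeat=2)
)
print(round(p_wet_true, 4))  # 0.4832
```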

#Chatbots: A chat robot (chatbot for short) designed to simulate conversation with human users by mimicking how a human would behave as a conversational partner. This can be through text chats, voice commands, or both.
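
A minimal keyword-matching sketch in Python (the keywords and replies are invented for illustration; real chatbots use far richer language processing, but the conversational loop looks much the same):

```python
# A minimal rule-based chatbot: match keywords in the user's
# message against canned responses.
RULES = {
    "hello": "Hi there! How can I help you?",
    "price": "Our plans start at $10/month.",
    "bye": "Goodbye! Have a great day.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand that."

print(reply("Hello, bot!"))         # Hi there! How can I help you?
print(reply("What is the price?"))  # Our plans start at $10/month.
```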

#Classification: Algorithms that assign a category to a data point based on the training data.
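
For instance, a one-nearest-neighbour classifier, sketched here in plain Python with made-up training data:

```python
# One-nearest-neighbour classification: assign each new point the
# label of its closest training example.
import math

train = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

def classify(point):
    # math.dist is the Euclidean distance (Python 3.8+).
    nearest = min(train, key=lambda item: math.dist(item[0], point))
    return nearest[1]

print(classify((1.1, 0.9)))  # cat
print(classify((5.1, 4.9)))  # dog
```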

#Clustering: A method of unsupervised learning and a common statistical data analysis technique. Clustering algorithms let machines group data points that show similarities to each other into groups (called clusters).
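
A bare-bones k-means sketch in Python, with invented points and two initial centroids: assign each point to the nearest centroid, move each centroid to the mean of its cluster, and repeat until nothing changes.

```python
# Bare-bones k-means clustering (k = 2).
import math

points = [(1, 1), (1.5, 2), (5, 8), (8, 8), (1, 0.6), (9, 11)]
centroids = [(1, 1), (5, 8)]  # initial guesses

for _ in range(10):
    # Assignment step: group points by nearest centroid.
    clusters = {c: [] for c in centroids}
    for p in points:
        nearest = min(centroids, key=lambda c: math.dist(c, p))
        clusters[nearest].append(p)
    # Update step: move each centroid to the mean of its members.
    new_centroids = []
    for c, members in clusters.items():
        if members:
            xs, ys = zip(*members)
            new_centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
        else:
            new_centroids.append(c)  # keep empty clusters in place
    if new_centroids == centroids:
        break
    centroids = new_centroids

print(centroids)
```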

#Cognitive Computing: A model that mimics the way the human brain thinks. It achieves this through self-learning, using data mining, natural language processing, and pattern recognition.

#Convolutional Neural Network (CNN): A type of neural network that identifies and makes sense of images.

#Computational Creativity: A multidisciplinary research area that draws on the fields of art, science, philosophy and artificial intelligence to engineer computational systems that are able to model, simulate and replicate human creativity.

#Data: Information collected and converted into a digital form.

#Data Mining: The examination of data sets to identify and extract patterns that can be of further use.

#Data Science: An interdisciplinary field that combines scientific methods, systems, and processes from statistics, information science, and computer science to provide insight into phenomena via either structured or unstructured data.

#Decision Model: A model that predicts what should happen if a certain action is taken using prescriptive analytics. The model also assesses the relationships between the elements of a decision to recommend the possible courses of action.

#Decision Tree: A tree and branch-based model used to map decisions and their possible consequences.
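
A hand-built example in Python (real systems learn the tree from data; the questions here are invented):

```python
# A decision tree as nested dictionaries: internal nodes ask a
# question, leaves hold the decision.
tree = {
    "question": lambda x: x["outlook"] == "sunny",
    "yes": {
        "question": lambda x: x["humidity"] > 70,
        "yes": "stay inside",
        "no": "play outside",
    },
    "no": "play outside",
}

def decide(node, example):
    if isinstance(node, str):  # leaf: return the decision
        return node
    branch = "yes" if node["question"](example) else "no"
    return decide(node[branch], example)

print(decide(tree, {"outlook": "sunny", "humidity": 85}))  # stay inside
print(decide(tree, {"outlook": "rain", "humidity": 50}))   # play outside
```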

#Deep Learning: Deep learning can find highly complex patterns in data sets by using multiple layers of correlations. In the simplest of terms, it does this by mimicking the way neurons are layered in your own brain. That's why computer scientists refer to this type of machine learning as a "neural network."
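
A minimal sketch of the idea in Python: two stacked layers, each a weighted sum followed by a nonlinearity, with made-up (not learned) weights:

```python
# Forward pass of a tiny two-layer neural network.
def relu(v):
    # Rectified linear unit: the standard deep-learning nonlinearity.
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    # Dense layer: output_j = sum_i(inputs_i * weights_j_i) + bias_j
    return [sum(i * w for i, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

x = [0.5, -1.2, 3.0]
hidden = relu(layer(x, [[0.1, 0.4, -0.2], [0.7, 0.0, 0.3]], [0.0, 0.1]))
output = layer(hidden, [[1.0, -1.0]], [0.0])
print(output)
```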

#Descriptive Model: A summary of a dataset that describes its main features and quantifies relationships in the data. Some common measures used to describe a data set are measures of central tendency (mean, median and mode).
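
Python's standard library computes these measures directly, as in this small example with invented data:

```python
# The three measures of central tendency named above.
import statistics

data = [2, 3, 3, 5, 7, 10]
print(statistics.mean(data))    # 5.0
print(statistics.median(data))  # 4.0
print(statistics.mode(data))    # 3
```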

#Fluent: A type of condition that can change over time. In logical approaches to reasoning about actions, fluents can be represented in first-order logic by predicates having an argument that depends on time.

#Genetic Algorithm: An evolutionary algorithm based on the principles of genetics, natural selection, and biological evolution. The algorithm randomly selects pairs of individuals from the population; these individuals are then crossed to create a new generation of two individuals, or children. It is used to find optimal or near-optimal solutions to problems that would otherwise take decades to solve.
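
A toy sketch in Python that evolves 8-bit strings toward all ones (fitness = number of 1 bits), showing selection, crossover, and mutation:

```python
import random

random.seed(0)

def fitness(bits):
    return sum(bits)  # count of 1 bits

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    # Flip each bit independently with small probability.
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 8:
        break
    parents = population[:10]  # keep the fittest half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print(generation, population[0])
```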

#Heuristic Search Techniques: Techniques that narrow down the search for optimal solutions to a problem by eliminating options that are incorrect.
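
For example, greedy best-first search expands whichever frontier node has the smallest heuristic estimate of remaining distance; this Python sketch uses an invented graph and heuristic:

```python
import heapq

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}  # estimated cost to reach D

def greedy_search(start, goal):
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        # Always expand the node that looks closest to the goal.
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph[node]:
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

print(greedy_search("A", "D"))  # ['A', 'C', 'D']
```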

#Inductive Reasoning: The ability to derive key generalised conclusions or theories by analysing patterns in a large data set.

#Inductive Logic Programming: An approach to machine learning whereby hypothesised logic is developed based on known background knowledge and a set of both positive and negative examples of what is and isn't true.

#Knowledge Engineering: Focuses on building knowledge-based systems, including all of the scientific, technical, and social aspects of it.

#Logic Programming: A programming paradigm where computation is carried out based on the knowledge repository of rules and facts.
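
A miniature Prolog-flavoured sketch in Python: a knowledge base of parent facts plus a grandparent rule, applied by forward chaining until no new facts follow (the names are invented):

```python
# Facts are tuples; the rule is: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def rule_grandparent(known):
    derived = set()
    for (p1, x, y1) in known:
        for (p2, y2, z) in known:
            if p1 == p2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

# Forward chaining: apply the rule until a fixed point is reached.
while True:
    new = rule_grandparent(facts) - facts
    if not new:
        break
    facts |= new

print(("grandparent", "alice", "carol") in facts)  # True
```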

#Machine Intelligence: An umbrella term that encompasses machine learning, deep learning, and classical learning algorithms.

#Machine Learning: A subgenre of AI that focuses on algorithms which can be designed to "learn" without being explicitly programmed and which change when exposed to new data. Such programs can use past performance data to predict and improve future performance.

#Machine Perception: A system's ability to receive and interpret data from the outside world in a fashion similar to how humans use their senses. This is typically done with attached hardware, though software is also usable.

#Natural Language Generation (NLG): A machine learning task in which an algorithm attempts to generate language that is comprehensible and human-sounding. The end goal is to produce computer-generated language that is indiscernible from language generated by humans.
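
The simplest form is template filling; here is a sketch in Python with an invented weather record (modern NLG uses learned language models, but the task is the same):

```python
# Template-based text generation: slot values from structured
# data into a sentence template.
record = {"city": "Oslo", "temp": 4, "condition": "light rain"}

def generate(rec):
    return (f"In {rec['city']} it is currently {rec['temp']} degrees "
            f"with {rec['condition']}.")

print(generate(record))
# In Oslo it is currently 4 degrees with light rain.
```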

#Natural Language Processing: A machine learning task concerned with recognizing human communication as it is meant to be understood.

#Optical Character Recognition (OCR): A computer system that takes images of typed, handwritten or printed text and converts them into machine-readable text.

#Overfitting: A machine learning problem whereby an algorithm is unable to discern information relevant to its assigned task from information which is irrelevant to its assigned task within training data. Overfitting therefore inhibits the algorithm's predictive performance when dealing with new data.
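
A small numpy sketch of the effect, assuming noisy samples of the line y = 2x: a degree-9 polynomial typically beats a straight line on the training points but does worse on held-out points.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(0, 0.1, 12)  # underlying truth: y = 2x
x_test = np.linspace(0.04, 0.96, 12)
y_test = 2 * x_test + rng.normal(0, 0.1, 12)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, "
          f"test MSE {test_mse:.4f}")
```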

#Parameter: Parameters help define a system, such as an event, thing, person, project, or situation. In AI, parameters are used to clarify exactly what an algorithm should be seeking to identify as important data when performing its target function.

#Predictive Analysis: Analysing past and current data to look for patterns that can help make predictions about future events or performance.

#Recurrent Neural Network (RNN): A class of artificial neural network where connections between units form a directed cycle. This allows it to exhibit dynamic temporal behavior.
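
The core recurrence, sketched in Python with made-up scalar weights: the hidden state at each step depends on the current input and the previous hidden state, so information cycles forward through time.

```python
import math

w_input, w_hidden, bias = 0.5, 0.8, 0.0
hidden = 0.0                      # initial hidden state

for x in [1.0, 0.0, -1.0, 0.5]:  # a short input sequence
    # h_t = tanh(w_input * x_t + w_hidden * h_{t-1} + bias)
    hidden = math.tanh(w_input * x + w_hidden * hidden + bias)
    print(round(hidden, 3))
```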

#Regression: A statistical measure for estimating the relationships among dependent and independent variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables (or ‘predictors’).
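
For the simplest case of one predictor, the least-squares line has a closed form, sketched here in plain Python with invented data:

```python
# Simple linear regression fitted in closed form.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope = covariance(x, y) / variance(x)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(round(slope, 3), round(intercept, 3))  # slope is close to 2
```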

#Supervised Learning: A type of machine learning in which human input and supervision trains the machine to generate the desired algorithms. This is more common than unsupervised learning.

#Unsupervised Learning: A type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses. In unsupervised learning, human input and supervision are extremely limited or absent altogether. The most common unsupervised learning method is cluster analysis.
