Tuesday, January 19, 2010

Neural networks


A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

The study of artificial neural networks[127] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Other important early researchers were Frank Rosenblatt, who invented the perceptron, and Paul Werbos, who developed the backpropagation algorithm.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks. Among recurrent networks, the most famous is the Hopfield net, a form of attractor network, which was first described by John Hopfield in 1982.[136] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning and competitive learning.
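To make the feedforward idea concrete, here is a minimal sketch of a single-layer perceptron, the simplest of the feedforward networks named above. It is illustrative only: the learning rate, epoch count, and the toy AND dataset are my own choices, not anything prescribed by the original researchers.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Train a perceptron on inputs X (n_samples x n_features) and labels y in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0  # signal passes in only one direction: input -> output
            error = target - pred
            w += lr * error * xi               # nudge the weights toward the correct answer
            b += lr * error
    return w, b

# Toy example: learn the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if x @ w + b > 0 else 0 for x in X])  # expected: [0, 0, 0, 1]
```

Recurrent networks such as the Hopfield net differ in that their connections form cycles, so activity can feed back on itself until the network settles into a stable (attractor) state.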

Jeff Hawkins argues that research in neural networks has stalled because it has failed to model the essential properties of the neocortex, and has suggested a model (Hierarchical Temporal Memory) that is based on neurological research.

Control theory

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.
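As an illustration of the kind of tool control theory supplies to robotics, here is a minimal sketch of a PID (proportional-integral-derivative) controller. The gains, time step, and the crude one-line plant model are assumptions chosen only to show the loop structure.

```python
def pid_step(error, prev_error, integral, dt, kp=1.0, ki=0.1, kd=0.05):
    """Return the control output and updated integral term for one time step."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral

# Example: drive a simple system toward a setpoint of 1.0.
setpoint, state, integral, prev_error, dt = 1.0, 0.0, 0.0, 0.0, 0.1
for _ in range(50):
    error = setpoint - state
    control, integral = pid_step(error, prev_error, integral, dt)
    state += control * dt   # crude plant model: the state follows the control signal
    prev_error = error
print(round(state, 3))       # settles near the setpoint
```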

Languages

AI researchers have developed several specialized languages for AI research, including Lisp and Prolog.
