
Artificial Intelligence - Neural Nets
I was thinking about turning this project into an article. Maybe I'll do that sometime and just use this code as supplemental material for it. Anyway, this is a neural network of the kind used in AI. Neural nets are often used to classify data into separate groups based on what the net has learned after being trained on a specific function. The code in this program can learn functions which are linearly separable, that is, functions where some straight line separates the outputs on a graph into two groups (OR is linearly separable; XOR is not). A minimal sketch of the decision rule such a net learns is shown below, before the full program.
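Here is that sketch with hand-picked weights for the OR gate. The weight and bias values are illustrative assumptions chosen by hand, not values produced by the perceptron code further down; the point is simply that a weighted sum followed by a hard threshold draws one straight line through the input space.

# Hand-picked weights for the OR gate (illustrative assumption, not learned)
weights = [1.0, 1.0]
bias = -0.5

def classify(inputs):
  # weighted sum of the inputs plus the bias, then a hard threshold
  s = sum(w * x for w, x in zip(weights, inputs)) + bias
  return 1 if s > 0 else 0

for pair in [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]:
  print(str(pair) + " -> " + str(classify(pair)))  # reproduces the OR truth table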
#!/usr/bin/python

# Threshold Perceptron Neural Network
#
# QUICK OVERVIEW
# Ability to learn and represent any linearly separable function
# given a supervised training set for that function. Uses a single
# layer feed forward architecture. Input is propagated to the output
# layer and the error is calculated in order to adjust the weights
# causing the neural net to learn the function by fitting the weights
# to the input-output criteria. A threshold activation is used and thus
# there is a hard linear separator between the binary output.
#
# To configure, supply the proper training data and change the number
# of input/output neurons as necessary.

import random

# Activation Function
def activation(x):
  \"Hi, I\'m Mr. Threshold\"
  if x > 0: return 1
  return 0

# Training data for OR gate
patterns  = [[[0.0, 0.0], [0.0]]]
patterns += [[[0.0, 1.0], [1.0]]]
patterns += [[[1.0, 0.0], [1.0]]]
patterns += [[[1.0, 1.0], [1.0]]]
# Training data for AND gate
#patterns  = [[[0.0, 0.0], [0.0]]]
#patterns += [[[0.0, 1.0], [0.0]]]
#patterns += [[[1.0, 0.0], [0.0]]]
#patterns += [[[1.0, 1.0], [1.0]]]
# Training data for NAND gate
#patterns  = [[[0.0, 0.0], [1.0]]]
#patterns += [[[0.0, 1.0], [1.0]]]
#patterns += [[[1.0, 0.0], [1.0]]]
#patterns += [[[1.0, 1.0], [0.0]]]
# Training data for NOR
#patterns  = [[[0.0, 0.0], [1.0]]]
#patterns += [[[0.0, 1.0], [0.0]]]
#patterns += [[[1.0, 0.0], [0.0]]]
#patterns += [[[1.0, 1.0], [0.0]]]
# Training data for MAJORITY (only a sample of the 32 five-input patterns)
#patterns  = [[[0.0, 0.0, 0.0, 0.0, 0.0], [0.0]]]
#patterns += [[[0.0, 0.0, 0.0, 0.0, 1.0], [0.0]]]
#patterns += [[[0.0, 0.0, 1.0, 1.0, 1.0], [1.0]]]
#patterns += [[[0.0, 1.0, 1.0, 1.0, 1.0], [1.0]]]
# Training data for NOT
#patterns  = [[[0.0], [1.0]]]
#patterns += [[[1.0], [0.0]]]

class Perceptron:
  def __init__(self, ni, no):
    # number of neurons in input and output layers
    self.ni = ni + 1 # add one for bias neuron/threshold
    self.no = no
    # the neuron activation values
    self.ineurons = [1.0] * self.ni
    self.oneurons = [0.0] * self.no
    # synapses grouped by output neuron
    # build each row independently so the weights are not aliased copies of one
    # another and each starts at its own random value
    self.wio = [[random.uniform(-1.0, 1.0) for _ in range(self.ni)]
                for _ in range(self.no)]
    
  def FeedForward(self, pattern):
    # inputs are first list in patterns
    # avoids changing bias activation from 1 by excluding it
    for i in range(self.ni-1):
      self.ineurons[i] = pattern[0][i]
    # calculate weighted sum for each output neuron using
    # every input neuron's synaptic connection
    for i in range(self.no):   # for each output neuron
      weightedSum = 0.0
      for j in range(self.ni): # consider each input neuron
        # the weighted sum for current output neuron
        weightedSum += self.ineurons[j]*self.wio[i][j]
      # calculate activation value of the weighted summation
      self.oneurons[i] = activation(weightedSum)
    
  def ThresholdLearning(self, patterns, epochs=100, lrate=0.2):
    # train over number of iterations given by epochs
    for e in range(epochs):
      # iterate over each pattern in training set
      for p in patterns:
        self.FeedForward(p)
        # target values are the second list in each pattern
        targets = p[1]
        # compute the error for each output neuron
        for k in range(self.no):
          err = targets[k] - self.oneurons[k]
          # update the weights using the network error: the perceptron learning
          # rule, which matches the gradient-descent step on the sum of squared
          # errors that a linear output would give
          for i in range(self.ni):
            self.wio[k][i] += lrate*err*self.ineurons[i]
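        # Hand-worked example of one such step (illustrative values; the real
        # weights start out random): with all weights at 0.0, input [1.0, 0.0]
        # plus the bias activation 1.0, and target 1.0 for the OR gate, the
        # weighted sum is 0.0, activation(0.0) returns 0, so err = 1.0 - 0 = 1.0
        # and every weight whose input activation is 1.0 grows by lrate*1.0 = 0.2,
        # leaving that output neuron's weights at [0.2, 0.0, 0.2].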
	    
  def QuizNet(self, patterns):
    # print the current weights, then the net's output for every pattern
    print(self.wio)
    for p in patterns:
      self.FeedForward(p)
      print(str(p) + " -> " + str(self.oneurons))

# build it
nnet = Perceptron(len(patterns[0][0]), len(patterns[0][1])) # input neurons, output neurons
# test net before it is taught
nnet.QuizNet(patterns)
# train it
nnet.ThresholdLearning(patterns)
# test it after being taught
nnet.QuizNet(patterns)
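Beyond quizzing on the whole training set, the trained net can be queried on a single input directly. This is a small usage sketch reusing the nnet object and FeedForward method from the listing above; FeedForward only reads the input half of a pattern, so the second list is just a placeholder target.

# Ask the trained OR net about one input; the [0.0] is a placeholder target,
# since FeedForward only looks at pattern[0].
nnet.FeedForward([[1.0, 0.0], [0.0]])
print(nnet.oneurons)  # a net that has learned OR should print [1]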
            