Thursday, July 19, 2007

ANN Tutorial - Forward Propagation

So far, if you've been keeping track of my blog, you'll have seen lots of stuff on Artificial Neural Networks (ANNs) and what they can do, but you might not know how they work. That's what this post is going to be about; today, I'll be talking exclusively about forward propagation or, in layman's terms, the act of getting the ANN to 'think'.

If you know much about anatomy, you'll know that the brain is made up of interconnected neurons, and it is believed that the strength of these connections is what forms a memory. Now, I'm not going to get into too much biology because it'll simply bore you out of your mind. Instead, let's just tackle the real thing and I'll throw in biological references from time to time... How's that?

First of all, one question that probably didn't pop into your head is: "What does an ANN look like?" You were probably expecting it to be some absurdly abstract concept, but it isn't quite that way; it is actually feasible, and very common might I say, to represent one using a flowchart. They say a picture is worth a thousand words, so here goes:

First of all, I'd like to direct you to the different 'layers' of this ANN: the input, hidden and output layers. This is rather simple; the input layer represents the input data, the hidden layer is the one that does the processing, and the output layer just outputs the result as either a 1 or a 0.

Secondly, you should take note of the 'interconnections' that link all the nodes together. The numbers you see on every red and black connector are called 'weights', and they're used to modify the output from the previous node (more on that later). Note that the green and blue connectors also have weights, but they were omitted from the diagram because they couldn't fit in. Typically, to begin with, the weight of every connection is initialized to a random value. Personally, I like to keep it in the range of -10 to 10.
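As a minimal sketch of that initialization step (the 3-input, 3-hidden layer sizes just mirror the diagram's example, and the helper name is my own):

```python
import random

def init_weights(n_inputs, n_hidden, lo=-10.0, hi=10.0):
    """One weight per input->hidden connection, drawn uniformly from [lo, hi]."""
    return [[random.uniform(lo, hi) for _ in range(n_inputs)]
            for _ in range(n_hidden)]

# One list of 3 incoming weights for each of the 3 hidden nodes:
weights = init_weights(3, 3)
print(weights)
```

Each hidden node gets its own row of incoming weights, which is all the 'strength of connection' information the network stores.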

As you can see from the diagram, in this case, the input layer is made up of 3 nodes, and each of them has a value of either 1 or 0 (binary). If we were to use this ANN to identify a pattern on a 3*1 bitmap of black and white pixels (black being 0 and white being 1), each input node would hold the value of one of the pixels. Then, each of these values would be processed; this is where the weights come in: each output from the input nodes (1 or 0) is scaled by the weight of the connector it travels along, via a simple multiplication.

Let's assume that the first pixel in our 3*1 bitmap was white. Looking back at our diagram, we can say that the red input node will be responsible for it. Let's go through the process, shall we? First of all, since the first pixel is white, the input node will output a 1. This value will then be propagated to each of the nodes in the hidden layer via different connections (the red ones).
The top hidden node, for example, will receive a value of -5.2 because the weight of the connector (-5.2) multiplied by the input (1) gives -5.2.
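In code, that single connection's contribution is just one multiplication (the -5.2 is the weight from the diagram's example):

```python
input_value = 1       # white pixel -> the input node outputs 1
weight = -5.2         # weight of the red connector to the top hidden node
contribution = weight * input_value
print(contribution)   # -5.2
```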

This process is repeated for each of the input nodes and, once it is complete, each hidden node will evaluate the sum of all the 'weighted' values it has received from the input nodes. If that sum is greater than a specified threshold (in most cases something like 0.1), then that hidden node will output a 1; otherwise, it will output a 0.
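A single hidden node's sum-then-threshold behaviour can be sketched like this (the weight values are made up for illustration; only the 0.1 threshold comes from the post):

```python
def step_node(inputs, weights, threshold=0.1):
    """A step-activation node: sum the weighted inputs, then threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Three binary inputs, three made-up incoming weights:
print(step_node([1, 0, 1], [-5.2, 3.1, 6.0]))  # -5.2 + 6.0 = 0.8 > 0.1, so prints 1
```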

Once that's done, the output from each of the hidden nodes (1 or 0) will again be weighted by the connections between the hidden and output layers. Again, this is just a simple multiplication... Once this is completed, the output node will find the sum of all the weighted values; if it is above the threshold, that output node will output a 1 (true), otherwise a 0 (false).
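Putting both layers together, a full forward pass through this 3-input step network might look like the sketch below (all weight values are arbitrary stand-ins, not the ones from the diagram):

```python
def step(total, threshold=0.1):
    return 1 if total > threshold else 0

def forward(inputs, hidden_weights, output_weights, threshold=0.1):
    """Forward-propagate binary inputs through one hidden layer to one output node."""
    hidden = [step(sum(w * x for w, x in zip(ws, inputs)), threshold)
              for ws in hidden_weights]
    return step(sum(w * h for w, h in zip(output_weights, hidden)), threshold)

# 3 inputs -> 3 hidden nodes -> 1 output node (weights chosen arbitrarily)
hidden_weights = [[-5.2, 2.0, 6.0],
                  [ 1.5, -3.0, 0.5],
                  [ 4.0, 4.0, -8.0]]
output_weights = [2.0, -1.0, 3.0]
print(forward([1, 0, 1], hidden_weights, output_weights))  # prints 1
```

The hidden layer fires (1, 1, 0) for this input, and the weighted sum at the output node is 2.0 - 1.0 = 1.0, which clears the threshold.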

In the examples which I posted before, I designed an artificial neural network to associate certain drawings with 1 and others with 0; since pretty much anything can be represented by binary, you might want to play around with that.

For those of you interested in biology: as we've heard, the brain is made up of interconnected neurons. Well, it is believed that the strength of each of these connections is what determines the strength of each memory unit it holds; that's probably why people say the brain is like a muscle: if your neurons are 'buff' in some areas, then those areas are going to represent the most prominent aspects of your intelligence. If only we knew what each of these memory units actually was, we would probably be able to better understand, and thus exercise much more control over, our ANNs.

In this case, I have shown you what is known as a step ANN, because the output of each node is only ever a 1 or a 0. There is another type of ANN which uses a sigmoid activation instead... You might want to do some research on that. Also, you can be pretty creative with your ANNs: each layer can have a variable number of nodes; you don't have to have the same number of hidden nodes as input nodes, you could have more than one output node if you liked, and you could even go as far as having more than one hidden layer... Think big!

OK, this concludes the tutorial on forward propagation. Next time, I'll be talking about back-error propagation, which will give you a way to teach your ANN. Just to get you thinking: it's done by altering the weights of each interconnection based on a comparison of the desired output with the actual final output... As the name of the algorithm suggests, it involves starting at the output node and 'correcting' the weights of each successive connector as you make your way back through the layers.

9 comments:

Alex said...

I don't see how the pattern recognition would recognize anything more than the amount of black/white, since it just has each pixel input going to the hidden layer, where in the end they're all summed together. So how would it know the difference between a cross and a tick if each of them had the same area?

Sorry if I don't understand how your neural network functions, I'm very new to them.

But nonetheless, very interesting stuff, and great work.

Keith said...

It'll make for an interesting read when I've got some time. My housemate tried to do ANNs for his final year project year before last at Uni, and I've got my own "AI" stuff to plan out for a game I want to make (where decision making is worryingly important).

Nice write up Alphabit. I think it'll give me some food for thought.

Oh, an aside: are we still allowed to use your Image Distort class freely, just for a comment in the credits?

Alex said...

Wow, what major are Jon/Keith doing with the neural networks?

It sounds like interesting stuff, but I'm still a little bit at a loss as to how it really works.

Jon said...

To Keith:
Yes you may use my image distort class if you credit me. I'm not going to be after you if you don't, but I'll appreciate it very much.

To Alex:
The thing is that it's not so much the ratio of black to white pixels that matters, but rather how these pixels are laid out. You need to keep in mind that the ANN uses both negative and positive numbers to adjust the inputs; this means that a certain pixel in a certain area will be processed differently by each hidden node. Because of that, there is a much greater range of possibilities. Somehow, this allows the system to recognize a 'loose/relational' pattern.

As for which major I'm doing... I'm going off to university next year for a Bachelor in IT. I take it as a compliment that you believed I had so much merit, thanks.

I've been programming since I was 14, and it's been over 3 years since. I learned about ANN theory in my last year of school and had to figure out how to implement it on my own.

It's not all flowers and butterflies though; most people would probably rather have a heart attack than work as hard as I have over the past 3 years. My social life is as active as that of a rock...

Keith said...

Jon:
Thanks. I'll use it whilst converting my terrible GTA style engine from AS2 to AS3. Always wanted distorted bitmap textures for the walls. Will be good practice before I use Papervision I guess.

If you know of any books out there I should be buying, bar the Foundation one Friends of ED will be releasing sometime later in the year, let me know!

Alex:
My housemate was doing Computer Games Programming at the Uni of Teesside in the UK. I've since graduated and now I just make Flash stuff for a web company, but want to make an emotion engine in Flash at some point.

Alex said...

I sort of get the whole neural network thing now. Thanks for the explanation. I decided to start on writing my own, and I'm now in the process of debugging it! (My implementation has your blog as a theoretical basis, but the code isn't really influenced by it).

I plan to major in Computer Engineering, hopefully with an emphasis on biological computation.

Your school teaches neural network theory? The biotechnology class at my high school was taught by a sewing teacher.

One thing I'm curious about is, does the brain really work this way? Or does it even come close to working this way? Neural networks don't seem to be capable of the complex reasoning that underlies human consciousness.

Alex said...

Oh, one thing I was thinking about with these neural networks: they can't notice if an object or shape is moved left, right, etc.

But if I see an A, I see that it's an A whether it's upside down, facing left, facing right, on the left side of my vision, on the right side of my vision, anywhere.

Yet the neural network seems to be fixated on a small scope, unable to notice that something has rotated.

I can't even begin to fathom a computer program that can recognize a shape as one shape at any angle, size, or color. I can't imagine any way to know, seeing someone's head from an entirely original angle, that that is a familiar person.
