If you know much about anatomy, you'll know that the brain is made up of interconnected neurons - it is believed that the strength of these connections is what forms a memory. Now, I'm not going to get into too much biology because it'll simply bore you out of your mind; instead, let's tackle the real thing and I'll throw in biological references from time to time... How's that?
First of all, one question that has probably popped into your head is: "What does an ANN look like?" You were probably expecting some absurdly abstract concept, but it isn't quite that way; it is actually feasible, and very common might I say, to represent one using a flowchart. They say a picture is worth a thousand words, so here goes:

Secondly, take note of the interconnections that link all the nodes together. The numbers you see on every red and black connector are called 'weights', and they're used to modify the output from the previous node (more on that later). Note that the green and blue connectors also have weights, but they were omitted from the diagram because they couldn't fit. Typically, to begin with, the weight of every connection is initialized to a random value. Personally, I like to keep them in the range of -10 to 10.
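To make that concrete, here's a minimal sketch in Python of that random initialization; the function name and the layer sizes are my own illustration, not anything fixed by the diagram:

```python
import random

def init_weights(n_from, n_to, lo=-10.0, hi=10.0):
    # One weight per connection between every node in the 'from' layer
    # and every node in the 'to' layer, each picked at random in [lo, hi].
    return [[random.uniform(lo, hi) for _ in range(n_to)]
            for _ in range(n_from)]

# 3 input nodes fully connected to 4 hidden nodes -> a 3x4 weight matrix
w_input_hidden = init_weights(3, 4)
```

Any range would work here; -10 to 10 is just my personal preference from above.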
As you can see from the diagram, the input layer in this case is made up of 3 nodes, and each of them holds a value of either 1 or 0 (binary). If we were to use this ANN to identify a pattern on a 3*1 bitmap of black and white pixels (black being 0 and white being 1), each input node would hold the value of one of the pixels. Then each of these values would be processed, and this is where the weights come in: each output from the input nodes (1 or 0) is distorted in some way by the weight of each of the connectors, via a simple multiplication.
Let's assume that the first pixel in our 3*1 bitmap is white. Looking back at our diagram, we can say that the red input node will be responsible for it. Let's go through this process, shall we? First of all, since the first pixel is white, the input node will output a 1. This value will then be propagated to each of the nodes in the hidden layer via different connections (the red ones).
The top hidden node, for example, will receive a value of -5.2 because the weight of the connector (-5.2) multiplied by the input (1) gives -5.2.
This process is repeated for each of the input nodes. Once it is complete, each hidden node evaluates the sum of all the 'weighted' values it has received from the input nodes. If that sum is greater than a specified threshold (in most cases something like 0.1), then that hidden node will output a 1; otherwise, it will output a 0.
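The steps above can be sketched in a few lines of Python. The -5.2 is the weight from our diagram; the weights for the other two inputs are made up purely for illustration:

```python
def step(x, threshold=0.1):
    # Step activation: fire (1) if the weighted sum clears the threshold.
    return 1 if x > threshold else 0

def hidden_node_output(inputs, weights, threshold=0.1):
    # Each input (1 or 0) is multiplied by the weight of its connector,
    # then the node sums the weighted values and applies the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return step(total, threshold)

# White first pixel (1) contributes 1 * -5.2 = -5.2 through its connector;
# the other two weights (3.1 and 7.4) are invented for this example.
print(hidden_node_output([1, 0, 1], [-5.2, 3.1, 7.4]))  # sum = 2.2 -> 1
```

Because 2.2 is above the 0.1 threshold, this hidden node fires.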
Once that's done, the output from each of the hidden nodes (1 or 0) will again be weighted by the connections between the hidden and output layers. Again, this is just a simple multiplication... Once this is complete, the output node will find the sum of all the weighted values, and if that sum is above the threshold, the output node will output a 1 (true); otherwise, a 0 (false).
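Putting both layers together, a complete forward pass over our 3-input network might look like this sketch. The hidden-layer size and every weight besides the -5.2 are my own examples, not values from the diagram:

```python
def forward(inputs, w_ih, w_ho, threshold=0.1):
    # w_ih[i][h]: weight from input node i to hidden node h
    # w_ho[h][o]: weight from hidden node h to output node o
    hidden = []
    for h in range(len(w_ih[0])):
        total = sum(inputs[i] * w_ih[i][h] for i in range(len(inputs)))
        hidden.append(1 if total > threshold else 0)
    outputs = []
    for o in range(len(w_ho[0])):
        total = sum(hidden[h] * w_ho[h][o] for h in range(len(hidden)))
        outputs.append(1 if total > threshold else 0)
    return outputs

# 3 inputs, 2 hidden nodes, 1 output node; all weights except the
# -5.2 from the diagram are invented for illustration.
w_ih = [[-5.2, 2.0], [1.0, 1.0], [7.4, -3.0]]
w_ho = [[4.0], [-2.0]]
print(forward([1, 0, 1], w_ih, w_ho))  # -> [1]
```

The same two steps (multiply by the weights, sum, threshold) happen at each layer; only the weight matrix changes.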
In the examples which I posted before, I designed an artificial neural network to associate certain drawings with 1 and others with 0; since pretty much anything can be represented in binary, you might want to play around with that.
For those of you interested in biology: as we've heard, the brain is made up of interconnected neurons. Well, it is believed that the strength of each of these connections is what determines the strength of each memory unit it holds; that's probably why people say the brain is like a muscle - if your neurons are 'buff' in some areas, then those areas will represent the most prominent aspects of your intelligence. If only we knew what each of these memory units actually was, we would probably be able to better understand, and thus exercise much more control over, our ANNs.
In this case, I have shown you what is known as a step ANN, because the output of each node is only ever a 1 or a 0. There is another type of ANN which performs sigmoid calculations... You might want to do some research on that. Also, you can be pretty creative with your ANNs: each layer can have a variable number of nodes; you don't have to have the same number of hidden nodes as input nodes, you could have more than one output node if you liked, and you could even go as far as having more than one hidden layer... Think big!
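For the curious, the sigmoid function simply replaces the hard threshold with a smooth curve; this definition is the standard one, not anything specific to my examples:

```python
import math

def sigmoid(x):
    # Smooth alternative to the step function: instead of jumping
    # straight from 0 to 1, the output varies continuously over (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(-5.2))  # close to 0
print(sigmoid(0.0))   # exactly 0.5
print(sigmoid(2.2))   # close to 1
```

A large negative weighted sum (like the -5.2 from earlier) gives an output near 0, and a large positive one gives an output near 1, so it behaves like a softened version of our step function.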
OK, this concludes the tutorial on forward propagation. Next time, I'll be talking about back-error propagation, which will show you a way to teach your ANN. Just to get you thinking: it's done by altering the weights of each interconnection based on a comparison of the desired output with the actual final output... As the name of the algorithm suggests, it involves starting at the output node and 'correcting' the weights of each successive connector as you make your way back through the layers.