All the network does is give an output for an input, using randomly generated weights in the range -1/sqrt(n) to 1/sqrt(n). So this is a totally untrained network, but I believe the results I'm seeing now are not normal.

Regardless of the image inputted, the output is incredibly similar, so the neural network "recognizes" only one specific number every time. I've fiddled with the number of hidden neurons and layers, but increasing them only makes the issue worse (the outputs show no differences at all).

The input is a 28 x 28 image, each pixel having a value in the range 0 to 255. Initially the values going into my sigmoid function were too large and saturated it, so I divide by 255 to get inputs in the range 0 to 1.

I did read some similar questions here, but the answers didn't help all too much. Any ideas as to why this happens?

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class Network:
    def __init__(self, input_neurons, hidden_neurons, output_neurons, hidden_layers):
        self.input_neurons, self.hidden_neurons = input_neurons, hidden_neurons
        self.output_neurons, self.hidden_layers = output_neurons, hidden_layers
        self.weights, self.bias, self.layers = {}, {}, {}
        # Weights are drawn uniformly from -1/sqrt(n) to 1/sqrt(n).
        self.weights["hidden0"] = np.random.uniform(low=-1 / (self.input_neurons ** 0.5), high=1 / (self.input_neurons ** 0.5), size=(self.hidden_neurons, self.input_neurons))
        for weight in range(1, self.hidden_layers):
            self.weights["hidden" + str(weight)] = np.random.uniform(low=-1 / (self.hidden_neurons ** 0.5), high=1 / (self.hidden_neurons ** 0.5), size=(self.hidden_neurons, self.hidden_neurons))
        self.weights["output"] = np.random.uniform(low=-1 / (self.output_neurons ** 0.5), high=1 / (self.output_neurons ** 0.5), size=(self.output_neurons, self.hidden_neurons))
        # Biases start at zero.
        for hidden_layer in range(self.hidden_layers):
            self.bias["hidden" + str(hidden_layer)] = np.zeros((self.hidden_neurons, 1))
        self.bias["output"] = np.zeros((self.output_neurons, 1))

    def query(self, inputs):
        # Feed the input forward through each hidden layer, then the output layer.
        self.layers["hidden0"] = sigmoid(np.dot(self.weights["hidden0"], inputs) + self.bias["hidden0"])
        for layer in range(1, self.hidden_layers):
            self.layers["hidden" + str(layer)] = sigmoid(np.dot(self.weights["hidden" + str(layer)], self.layers["hidden" + str(layer - 1)]) + self.bias["hidden" + str(layer)])
        return sigmoid(np.dot(self.weights["output"], self.layers["hidden" + str(self.hidden_layers - 1)]) + self.bias["output"])
```
That's everything I've written so far; I haven't written the back propagation function yet.
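For reference, this is roughly how I call it. The 784/100/10 layer sizes and the random test image below are just placeholders to show the call, not my exact setup:

```python
# Placeholder sizes: 784 inputs for a flattened 28 x 28 image, 10 outputs for the digits 0-9.
net = Network(input_neurons=784, hidden_neurons=100, output_neurons=10, hidden_layers=2)

# Stand-in for a real image: pixel values are 0 to 255, so divide by 255 before querying.
image = np.random.randint(0, 256, size=(784, 1))
print(net.query(image / 255.0))
```

Whichever image I substitute in here, the vector this prints is almost identical.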