Motivation
After successfully creating the clap-activated switch, I became intrigued by the possibility of controlling devices with voice commands. Recognizing that voice detection was complex, I initially aimed to master fundamental neural network principles through simplified examples.
Objectives & Goals
My goal was to understand the underlying mechanisms of neural networks and to visualize how these structures adapt during learning, laying the groundwork for future audio and speech recognition tasks.
Solution & Implementation
I coded a multi-layer perceptron entirely in Java and built a JavaFX interface for visualization. Neurons appeared as nodes, and connection weights were drawn dynamically as edges varying in thickness and color, clearly showing their evolution during backpropagation training. To start, the network learned basic logical operations like XOR, confirming that the implementation was correct.
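The original Java sources are not included here, so the following is a minimal sketch of the kind of network involved: a 2-2-1 perceptron with sigmoid activations trained on XOR by plain backpropagation. Class and variable names are illustrative, the JavaFX visualization layer is omitted, and convergence depends on the random initialization.

```java
import java.util.Random;

// Minimal 2-2-1 multi-layer perceptron trained on XOR with backpropagation.
// Sigmoid activations throughout; weights updated by plain gradient descent.
public class XorMlp {
    static final double LEARNING_RATE = 0.5;
    static final Random RNG = new Random(42);

    // Hidden layer: 2 neurons with 2 inputs each; single output neuron.
    static double[][] wHidden = {{rand(), rand()}, {rand(), rand()}};
    static double[] bHidden = {rand(), rand()};
    static double[] wOut = {rand(), rand()};
    static double bOut = rand();

    static double rand() { return RNG.nextDouble() * 2 - 1; }
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        double[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] targets = {0, 1, 1, 0};

        for (int epoch = 0; epoch < 20000; epoch++) {
            for (int s = 0; s < inputs.length; s++) {
                double[] x = inputs[s];
                double t = targets[s];

                // Forward pass.
                double[] h = new double[2];
                for (int j = 0; j < 2; j++) {
                    h[j] = sigmoid(wHidden[j][0] * x[0] + wHidden[j][1] * x[1] + bHidden[j]);
                }
                double y = sigmoid(wOut[0] * h[0] + wOut[1] * h[1] + bOut);

                // Backward pass: deltas for squared error with sigmoid units.
                double deltaOut = (y - t) * y * (1 - y);
                double[] deltaHidden = new double[2];
                for (int j = 0; j < 2; j++) {
                    deltaHidden[j] = deltaOut * wOut[j] * h[j] * (1 - h[j]);
                }

                // Weight updates; in the visualizer this is where edge
                // thickness and color would be redrawn from the new weights.
                for (int j = 0; j < 2; j++) {
                    wOut[j] -= LEARNING_RATE * deltaOut * h[j];
                    wHidden[j][0] -= LEARNING_RATE * deltaHidden[j] * x[0];
                    wHidden[j][1] -= LEARNING_RATE * deltaHidden[j] * x[1];
                    bHidden[j] -= LEARNING_RATE * deltaHidden[j];
                }
                bOut -= LEARNING_RATE * deltaOut;
            }
        }

        // Print learned XOR outputs; values should approach 0, 1, 1, 0.
        for (double[] x : inputs) {
            double h0 = sigmoid(wHidden[0][0] * x[0] + wHidden[0][1] * x[1] + bHidden[0]);
            double h1 = sigmoid(wHidden[1][0] * x[0] + wHidden[1][1] * x[1] + bHidden[1]);
            double y = sigmoid(wOut[0] * h0 + wOut[1] * h1 + bOut);
            System.out.printf("%.0f XOR %.0f -> %.3f%n", x[0], x[1], y);
        }
    }
}
```

Mapping each weight to an edge whose thickness reflects its magnitude and whose color reflects its sign makes the effect of every backpropagation step visible as the network trains.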
Results & Achievements
I stopped short of full voice recognition once I understood the complexity of what I was trying to achieve, but the system successfully learned simple logic gates, validating my neural network implementation. The dynamic visualization helped me understand the weight adjustment process intuitively.
Learnings & Reflections
This project was pivotal for my understanding of neural network basics and visualization techniques. It also introduced me, in theory, to audio processing with Fourier transforms in conjunction with CNNs, even though I never implemented that myself. These lessons shaped my approach to complex ML problems in subsequent projects.