			Comments
- NoMad (5y): Like, what the fuck do you want from me, you stupid ass network? The pattern is there, I'm giving you proper levels of CNN followed by FC, and yet you fail to see CLEAR FUCKING PATTERNS except for that one or two patterns you really like. 😑😑😑

  Go fuck yourself.
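For anyone skimming who hasn't built one: "CNN followed by FC" just means convolutional feature extraction feeding a fully-connected classifier head. A minimal NumPy forward pass shows the shape flow; all shapes and weights here are made up for illustration, not NoMad's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image x with kernel k."""
    kh, kw = k.shape
    h = x.shape[0] - kh + 1
    w = x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# One conv layer followed by one fully-connected layer: the "CNN then FC"
# stack from the rant, with toy shapes.
x = rng.normal(size=(8, 8))          # a tiny single-channel "image"
kernel = rng.normal(size=(3, 3))
W_fc = rng.normal(size=(10, 36))     # 6x6 conv output, flattened -> 10 logits
b_fc = np.zeros(10)

features = relu(conv2d(x, kernel))   # (6, 6) feature map
logits = W_fc @ features.ravel() + b_fc
print(logits.shape)                  # one logit per class
```

In a real framework the conv layer would have many kernels (channels) and the stack would repeat, but the shape bookkeeping is the same.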
- vane (5y): Is your gradient stuck in a local minimum?

  Just a random comment from a guy without a university diploma who knows nothing about deep learning.
- NoMad (5y): @vane Using Adam, changed the learning rate, tried Nadam, Adagrad, Adadelta, and SGD; none of them really improves things. Adam does the best so far.

  (Adagrad and Adadelta don't really apply here, but I tried them out anyways.)
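For context on why Adam usually wins this kind of optimizer shootout: it rescales each step by running moment estimates of the gradient. A NumPy re-implementation of the two update rules on a toy quadratic (the loss, learning rates, and step count are invented for illustration, not taken from NoMad's setup):

```python
import numpy as np

def sgd_step(w, grad, lr=0.1):
    """Plain SGD update."""
    return w - lr * grad

def adam_step(w, grad, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Adam update; state carries the running first/second moment estimates."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad          # first moment (momentum)
    v = b2 * v + (1 - b2) * grad ** 2     # second moment (scale)
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, (m, v, t)

# Toy loss f(w) = (w - 3)^2 with gradient 2*(w - 3); minimum at w = 3.
grad = lambda w: 2.0 * (w - 3.0)

w_sgd, w_adam, state = 0.0, 0.0, (0.0, 0.0, 0)
for _ in range(500):
    w_sgd = sgd_step(w_sgd, grad(w_sgd))
    w_adam, state = adam_step(w_adam, grad(w_adam), state)

print(w_sgd, w_adam)  # both should end up near 3.0
```

On a real non-convex network loss the adaptive scaling matters much more than it does here, which is consistent with "Adam does the best so far".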
- vane (5y): I probably know nothing about machine learning, but if you cut one variable from the dataset, or decrease the dataset size, and the results are the same, that can tell you something 😅
- Root (5y): Have you tried threatening it?
  That usually works for me.

  Just make sure to follow through, or it won’t take you seriously.
- Read in an article (here: https://technologyreview.com/2019/...) that for some small network configurations, the random weights assigned at the start can leave them with shit accuracy for some reason, even after a lot of training.
  Maybe wipe it and try again?
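The "wipe it and try again" advice amounts to random restarts: a bad initialization can park you in a poor basin, and re-initializing gives you a fresh draw. A toy NumPy illustration with plain gradient descent on a 1-D non-convex loss (the loss function, learning rate, and restart count are all invented; a real network's loss surface is far messier):

```python
import numpy as np

def loss(w):
    # Non-convex toy loss with two basins of different depth.
    return (w - 2.0) ** 2 * (w + 1.0) ** 2 + 0.5 * w

def grad(w):
    return 2.0 * (w - 2.0) * (w + 1.0) * (2.0 * w - 1.0) + 0.5

def train(w0, lr=0.01, steps=500):
    """Plain gradient descent from a given initialization."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

rng = np.random.default_rng(42)
results = []
for restart in range(5):
    w0 = rng.normal(scale=2.0)   # fresh random init, like re-initializing a net
    w = train(w0)
    results.append((loss(w), w0, w))

best_loss, best_w0, best_w = min(results)
print(best_loss, best_w)  # the best of several random restarts
```

Depending on where each restart lands, gradient descent settles in the shallower or the deeper basin; keeping the best restart is the whole trick. (The article linked above is about something related but more specific: the lottery ticket hypothesis, where lucky sub-network initializations matter.)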



[NN]
Day 3: the accuracy has gone to shit and continues to stay that way, despite me cleaning that damn data up.
Urghhhhhhhh
*bangs head against the wall, repeatedly*
rant