Comments
-
Hazarth: Not sure if I can help, but I'm curious from your previous post: what are your inputs? Are those vision rays or something like that?
-
No, they are parameters for a turret and a target.
The idea is that the model estimates which direction to swivel the turret and how fast.
Not the way I'd do it in a game, but I wanted to see. -
@Hazarth the thing I'm remembering that might be hurting it is the mixing of normalized and real values for the inputs -
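For context, a minimal sketch of what such a feature vector might look like. The parameter names (turret angle, reload time, target position/velocity) and the two outputs are assumptions based on this thread, not the OP's actual code:

```python
import numpy as np

# Hypothetical feature layout based on the thread -- the real parameter
# names and units are assumptions, not the OP's actual code.
def build_input(turret_angle_rad, turret_max_speed, reload_time,
                target_pos_xy, target_vel_xy):
    # Raw, un-normalized feature vector: it mixes radians, seconds,
    # and world-space units (the mixing being discussed here).
    return np.array([
        turret_angle_rad,
        turret_max_speed,
        reload_time,
        target_pos_xy[0], target_pos_xy[1],
        target_vel_xy[0], target_vel_xy[1],
    ], dtype=np.float32)

# Hypothetical training target: swivel direction (-1..1) and rotation speed.
def build_target(swivel_direction, swivel_speed):
    return np.array([swivel_direction, swivel_speed], dtype=np.float32)
```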
Hazarth: Mapping data directly to the input is not going to work well. Neural networks on average don't have a good understanding of trigonometry or of vector quantities like x,y positions and velocities.
I know it feels like the net should be able to figure it out, but that's practically impossible, especially in a small network like yours.
You also mix all kinds of inputs. NNs are really bad at deriving semantic information from un-normalized data. Think about it: you have a fully connected layer joining stuff like turret speed to reload time to maxTrackSpeed to target velocity...
While as a human you can think about the countless relations those have, an NN with randomized weights will just see that some values sometimes change with others and other times they don't. For example, a target moving directly away from the turret has a velocity and a position change, yet the turret should keep shooting straight... Stuff like that will confuse the network. -
Hazarth: Remember, garbage in = garbage out.
In the case of NNs, garbage is anything that's not normalized and cherry-picked to be strictly related.
Unless you go all in with deep networks, which can do some crazy stuff... but consumer computers are only barely able to handle those in a reasonable time. -
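A minimal normalization sketch following this advice, assuming the hypothetical feature vector above; the per-feature ranges are made-up examples, not real game limits:

```python
import numpy as np

# Assumed per-feature ranges for the hypothetical inputs above; the
# actual limits depend on the game world and are made up here.
FEATURE_MIN = np.array([-np.pi, 0.0, 0.0, -1000.0, -1000.0, -50.0, -50.0], dtype=np.float32)
FEATURE_MAX = np.array([ np.pi, 10.0, 5.0,  1000.0,  1000.0,  50.0,  50.0], dtype=np.float32)

def normalize(features):
    # Min-max scale every feature into 0..1 so no single input
    # (e.g. a world position of 800) dominates the weighted sums.
    scaled = (features - FEATURE_MIN) / (FEATURE_MAX - FEATURE_MIN)
    return np.clip(scaled, 0.0, 1.0)
```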
Hazarth: Something that should work better is using sensors instead of your discrete value inputs.
Ray casts looking for the enemy. -
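One way such a sensor could look, as a rough sketch: cast a fan of rays from the turret and feed the network normalized hit distances instead of raw positions. The circle-intersection test below is a stand-in for whatever ray cast the game engine actually provides:

```python
import numpy as np

def ray_sensor_inputs(turret_pos, target_pos, target_radius=5.0,
                      n_rays=16, max_range=1000.0):
    """Cast n_rays evenly around the turret and return, per ray, the
    normalized distance (0..1) at which the target is seen, or 1.0 for a miss."""
    readings = np.ones(n_rays, dtype=np.float32)
    to_target = np.asarray(target_pos, dtype=np.float32) - np.asarray(turret_pos, dtype=np.float32)
    for i in range(n_rays):
        angle = 2.0 * np.pi * i / n_rays
        direction = np.array([np.cos(angle), np.sin(angle)], dtype=np.float32)
        # Distance along the ray to the target's closest approach.
        along = float(np.dot(to_target, direction))
        if along <= 0.0:
            continue  # target is behind this ray
        # Perpendicular offset from the ray; a hit if within the target radius.
        closest = to_target - along * direction
        if np.linalg.norm(closest) <= target_radius:
            readings[i] = min(along, max_range) / max_range
    return readings
```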
@Hazarth well, covariance is kind of what I'm looking for, one would think. These are just values for which the calculations have already yielded solutions, but what I think I'm getting at is that some of the values, like the positions, might be very large compared to the others, and their variance will likely span the full range.
Should I just try to keep the ranges of the inputs between 0 and 1, or something like that?
And you're right, it does seem like it should be able to figure this out given that its training is derived from calculus, but I still can't help but wonder if it's something else... like I forgot something. -
Hazarth: @AvatarOfKaine definitely keep your values between 0 and 1.
NNs are just a series of multiplications. When one input is 0.2 and another is 200, you can see how that will greatly offset the balance and blow up the weights, especially in a fully connected setup where everything is affected by everything... which means all inputs eventually get multiplied by your huge inputs and affect everything else, and so on...
Also, calculus is only used to learn; as stated above, the actual network is just a series of multiplications, not too different from fitting polynomial curves to a bunch of desired outputs. Most of the work in there is done by the activation function; that's what really gives shape to each (I*W) term. Using the right activation function for your problem is also important. ReLU does pretty well overall, but you should experiment with others to see if any of them improve your performance. -
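For illustration, a minimal PyTorch sketch of the kind of small fully connected network being discussed; the layer sizes are arbitrary, and the 7-input/2-output shape follows the hypothetical vectors above:

```python
import torch.nn as nn

# Each nn.Linear is just the multiply-and-add described above; ReLU is the
# activation that shapes each (I*W) term. Swap nn.ReLU() for nn.Tanh() or
# others to experiment with different activations.
model = nn.Sequential(
    nn.Linear(7, 32),
    nn.ReLU(),
    nn.Linear(32, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
```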
@Hazarth well, I followed the example in PyTorch; you'd think there would be SOME change in the values returned by the model after training.
Also, how many epochs should there be when I'm randomly generating training data?
The MNIST example used 100. -
Hazarth: @AvatarOfKaine
In my experience, it's best to create a system that trains indefinitely, reports the scores to you, and allows you to cancel it at any point. The number of epochs really depends on you; the longer you let it run, the better it gets, but improvement slows down. The best you can really do to calibrate it is to watch it learn, see when it stops improving, then mess with the values and architecture until it gets better, and then just let it rip for a long time and see what you get...
I'm no expert, but it's mainly a lot of knob turning and calibration. -
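A rough sketch of such an open-ended training loop in PyTorch, assuming the model from the sketch above; the random_batch generator is a placeholder for the OP's actual data generation:

```python
import itertools
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

def random_batch(batch_size=64):
    # Placeholder: substitute real randomly generated, normalized
    # turret/target inputs and their known-correct outputs here.
    x = torch.rand(batch_size, 7)
    y = torch.rand(batch_size, 2)
    return x, y

try:
    for epoch in itertools.count():  # train "infinitely"
        x, y = random_batch()
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        if epoch % 100 == 0:
            print(f"epoch {epoch}: loss {loss.item():.6f}")  # report scores
except KeyboardInterrupt:
    print("stopped by user")  # cancel at any point
```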
@Oktokolo it's not about that; it's a side project to get familiar with making them and some of their pitfalls. Purely personal interest.
And the answer is hell no, not in a million years, lol. But see, I'm already learning things, and it would be interesting to see how it performs...
Finally... -
@Oktokolo years back, when these people kept cutting me off over and over like they are now, claiming it's the same set of years and dragging me back to a shit time instead of letting me move on to the better ones forthcoming, I was quite intrigued by this and enjoyed just gradually adding to things like this.
But they stole it previously, like the fucks always do.
I get this sneaky suspicion my model isn't training at all...
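A quick sanity check for that suspicion, as a sketch building on the earlier snippets: snapshot the weights, run one step, and confirm gradients are non-zero and parameters actually move:

```python
import copy
import torch

# Snapshot the weights, run one training step, and check that they moved.
before = copy.deepcopy(model.state_dict())

x, y = random_batch()
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()

# If every gradient is None or zero, the graph is broken (e.g. a detach(),
# a torch.no_grad() block, or a loss that doesn't depend on the model).
grad_magnitude = sum(p.grad.abs().sum().item()
                     for p in model.parameters() if p.grad is not None)
print("total grad magnitude:", grad_magnitude)

optimizer.step()
after = model.state_dict()
changed = any(not torch.equal(before[k], after[k]) for k in before)
print("weights changed after one step:", changed)
```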