
ohhhhh I am pissseddddddddddd

it's the fucking torch.nn.Module class, it would seem!

I do exactly the same goddamn thing it's supposed to do in a goddamn notebook, run it step by step, and the fucking model trains!!!! The output values change and the loss decreases!!
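For reference, this is roughly the notebook version that trains fine (a sketch from memory; the toy data, layer sizes, and names are all made up):

```python
import torch
import torch.nn as nn

# made-up toy data, just to have something to fit
x = torch.randn(64, 10)
y = torch.randn(64, 1)

# layers defined one at a time, no class anywhere
fc1 = nn.Linear(10, 32)
fc2 = nn.Linear(32, 1)
relu = nn.ReLU()

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(list(fc1.parameters()) + list(fc2.parameters()), lr=0.01)

for step in range(100):
    optimizer.zero_grad()
    out = fc2(relu(fc1(x)))          # chaining the calls one after another
    loss = criterion(out, y)
    loss.backward()
    optimizer.step()                 # loss goes down, weights actually change
```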

I do this in the goddamn class derived from nn.Module, with a call to model.parameters() for the optimizer, and the fucker fails!!!
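And this is roughly the class version that refuses to train for me (again a sketch; the names are placeholders, not my actual code):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # layers assigned as attributes get registered on the module,
        # so model.parameters() is supposed to pick them up
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

x = torch.randn(64, 10)
y = torch.randn(64, 1)

model = Net()
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```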

Why???
Why?????
Why??????
Is it cloning the goddamn parameters so the references aren't there????
Seems to work goddamn fine when I call one layer and one activation function at a time, chaining the calls one after another!!!!!!
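For what it's worth, as far as I know parameters() doesn't clone anything; it just walks the registered submodules and yields the actual Parameter tensors, so the optimizer should be holding the real things. A sanity check I could run (using the same made-up Net as above):

```python
params = list(model.parameters())

# same object, not a copy -- should print True if nothing was cloned
print(params[0] is model.fc1.weight)

# and they share the same underlying storage
print(params[0].data_ptr() == model.fc1.weight.data_ptr())

# if this is 0, the layers never got registered on the module at all
print(len(params))
```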

UGHHHH IT LOOKS LIKE IF YOU DEFINE THE LOSS AND OPTIMIZER OUTSIDE THE FUCKING CLASS IN A SEPARATE TRAINING FUNCTION IT DOESN'T TRAIN!!!!!!
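Concretely, the arrangement that's blowing up looks something like this (sketch only, names made up; and to be fair, building the loss and optimizer inside a standalone training function is supposed to work, so if this sketch trains and my real code doesn't, the actual difference is hiding somewhere else):

```python
def train(model, x, y, epochs=100, lr=0.01):
    # loss and optimizer created here, outside the nn.Module subclass
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    for epoch in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    return loss.item()

model = Net()                     # same made-up Net and toy x, y as above
final_loss = train(model, x, y)   # the optimizer still holds the real parameters
```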

WHY??
A REFERENCE IS A GODDAMN REFERENCE!!!!
