A parameter can be set to an arbitrary tensor by use of the .data attribute. I have read that this is discouraged; what would be the proper way of doing this?

This issue arose for me in the context of using a reparametrization: when one is used, one can assign the parameter directly, without the use of .data:

```python
import torch
from torch.nn import Identity
from torch.nn.utils.parametrize import register_parametrization

linear = torch.nn.Linear(2, 2)  # example layer

register_parametrization(linear, 'weight', Identity())
linear.weight = torch.ones_like(linear.weight)  # direct assignation works here
```

Then, if I have a model definition, the assignation code must be dependent on whether it is reparametrized or not. Is there a "proper" way of assigning an arbitrary tensor to a parameter that works in these two cases (with and without a parametrization)? If I use the Parameter wrapper on the assignation, it won't work if there is a parametrization: the assignation throws KeyError: "attribute 'weight' already exists".

On a sidenote, related to the use of no_grad: the reply on the thread you link emphasizes the use of no_grad (I think) so as not to mess up the grad calculation, but it doesn't address what a proper assignation is. What I mean is that, for example, wrapping layer.weight = w in no_grad won't work if there is no parametrization.

My use case is in the context of a parameter that represents a physical value of a simulation, so essentially it's a modeling problem. I can change the values of the simulation, and I can either retrain that parameter or, in this case, set it and focus on something else. All nn.Parameter weights are automatically added to net.parameters(), so when you do training like optimizer = optim.SGD(net.parameters(), lr=0.01), the fixed weight will not be changed.
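For reference, both failing assignations described above are reproducible in a few lines. A minimal sketch (the Linear shapes are placeholders, not from the original post):

```python
import torch
from torch.nn import Identity
from torch.nn.utils.parametrize import register_parametrization

# Case 1: no parametrization. Assigning a plain tensor fails even inside
# no_grad, because nn.Module only accepts an nn.Parameter (or None) for a
# name registered as a parameter.
plain = torch.nn.Linear(2, 2)
try:
    with torch.no_grad():
        plain.weight = torch.ones(2, 2)
except TypeError as e:
    print(e)  # cannot assign 'torch.FloatTensor' as parameter 'weight' ...

# Case 2: with a parametrization. Wrapping the value in nn.Parameter fails,
# because 'weight' is now a property on the parametrized module and no
# longer lives in module._parameters.
reparam = torch.nn.Linear(2, 2)
register_parametrization(reparam, 'weight', Identity())
try:
    reparam.weight = torch.nn.Parameter(torch.ones(2, 2))
except KeyError as e:
    print(e)  # "attribute 'weight' already exists"
```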
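One workaround that appears to cover both cases is to branch on torch.nn.utils.parametrize.is_parametrized (available since PyTorch 1.9). This is only a sketch, not an officially sanctioned answer, and set_param is a hypothetical helper name:

```python
import torch
from torch.nn.utils import parametrize

def set_param(module: torch.nn.Module, name: str, value: torch.Tensor) -> None:
    """Assign an arbitrary tensor to module.<name>, parametrized or not."""
    if parametrize.is_parametrized(module, name):
        # The property setter installed by register_parametrization routes
        # the value through the parametrization's right_inverse (assumed to
        # be the identity when not defined) and copies it into the hidden
        # original parameter.
        setattr(module, name, value)
    else:
        # In-place copy keeps the existing nn.Parameter object alive, so any
        # references held by an optimizer stay valid; no_grad keeps the copy
        # out of the autograd graph.
        with torch.no_grad():
            getattr(module, name).copy_(value)
```

With the parametrized linear from the question, set_param(linear, 'weight', torch.ones_like(linear.weight)) and the same call on an unparametrized layer should both succeed.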
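The closing remark about the fixed weight presumably assumes that requires_grad has been switched off for that parameter; otherwise an optimizer over net.parameters() would update it. A sketch of that idiom (the layer and the choice of SGD are illustrative):

```python
import torch
from torch import optim

net = torch.nn.Linear(2, 2)
net.weight.requires_grad_(False)  # freeze the physically-set weight

# Hand only the trainable parameters to the optimizer; the frozen weight
# receives no gradient and is left untouched during training.
optimizer = optim.SGD(
    [p for p in net.parameters() if p.requires_grad], lr=0.01
)
```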