Pytorch lobpcg not working with complex sparse arrays?

Asked by Oti Dioti on 9/15/2023 · Last edited by Oti Dioti · Updated 9/15/2023 · Viewed 49 times

Q:

I am currently trying to diagonalize large sparse arrays on my GPU using PyTorch's lobpcg. The function seems to work fine when the array has real-valued entries, but whenever the array contains complex values it returns a rather confusing error message. For reference, here is a snippet of my code followed by the error message it returns:

from scipy import sparse
import torch
from torch import lobpcg
import numpy as np

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
#%%
def Torchfy(arr): # stores a scipy sparse array on the GPU as a torch sparse tensor
    arr = arr.tocoo() # convert to COO ("coordinate") format, a way to store the matrix on the GPU
    # Build a sparse PyTorch tensor from the COO indices and values; device was defined in the first cell
    arr = torch.sparse_coo_tensor(indices=torch.tensor(np.array([arr.row, arr.col])), values=torch.tensor(arr.data), size=arr.shape).to(device)
    return arr
def H_L(xcoor, ycoor, zcoor, b = 1, g1 = 1, g2 = 1, g3 = 1, e = 1, m = 1, hbar = 1):
    # N3 (= N**3), the finite-difference operators ddx, ..., ddzz, and the angular momentum
    # matrices J_x2, ..., jyjz used below are defined elsewhere in the full script.
########################################### Pre-calculations ###############################################
    h2 = hbar ** 2 # hbar squared
    c1 = - h2/(2*m) * (g1 + 2.5 * g2) # constant for the first term
    c2 = g2 * h2 / m # constant for the second term
    c3 = g3 * h2/(2*m) # constant for the third term
    e2 = e ** 2 # electron charge squared
    eh = e / hbar
    e2h2 = e2 / h2
    y = 0.5 * b * sparse.spdiags(ycoor.reshape(N3), np.array([0]), N3, N3) # put on the diagonal of an (N^3, N^3) matrix
    x = 0.5 * b * sparse.spdiags(xcoor.reshape(N3), np.array([0]), N3, N3) # put on the diagonal of an (N^3, N^3) matrix
    y2 = y ** 2
    x2 = x ** 2
############################################ First Term ####################################################
    lapl = ddxx + ddyy + ddzz # Laplacian
    a_nabl = -1j * eh * (x * ddy - y * ddx) # A(r) * nabla, where A generates B = b * \hat{z}
    a2 = e2h2 * (y2 + x2)
    tmp1 = c1 * (- lapl + a_nabl + a2) # sum the terms relevant to tmp1
    tmp1 = sparse.kron(tmp1, np.eye(4)) # no J_i coupling, so take the kron with the identity
############################################ Second Term ###################################################
    kx2 = - ddxx + 1j * eh * y * ddx + e2h2 * y2
    ky2 = - ddyy - 1j * eh * x * ddy + e2h2 * x2
    tmp2 = sparse.kron(kx2, J_x2) + sparse.kron(ky2, J_y2) + sparse.kron(-ddzz, J_z2)
############################################ Third Term #####################################################
    kxky = 0.5 * (- 2 * ddxy - 2 * e2h2 * x * y - 1j * eh * (x * ddx - y * ddy))
    kxkz = 0.5 * (- 2 * ddxz + 1j * eh * y * ddz)
    kykz = 0.5 * (- 2 * ddyz - 1j * eh * x * ddz)
    tmp3 = sparse.kron(kxky, jxjy) + sparse.kron(kxkz, jxjz) + sparse.kron(kykz, jyjz)
    return tmp1 + tmp2 + tmp3
h = H_L(X, Y, Z)
h = Torchfy(h)
# Using the "Locally optimal block preconditioned conjugate gradient method" to obtain the first 10 eigenvalues
# we focus on the lowest ones (i.e. largest = False)
eigenvalues, eigenvectors = lobpcg(h, k=10, largest=False)

After running the full version of the code above, I get the error message: RuntimeError: expected scalar type ComplexDouble but found Float. Is there any way around this?
Thanks in advance.
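
For reference, the failure does not depend on the Hamiltonian itself: a minimal sketch along the following lines (using a small, purely illustrative diagonal test matrix, not from the original post) reproduces the same behaviour on a recent PyTorch build.

# Minimal sketch: the same lobpcg call succeeds on a real sparse symmetric
# matrix and fails once the values are complex.
import torch
from torch import lobpcg

n = 100
idx = torch.arange(n).repeat(2, 1) # indices of a diagonal matrix
vals = torch.rand(n, dtype=torch.float64)

real_h = torch.sparse_coo_tensor(idx, vals, (n, n))
print(lobpcg(real_h, k=3, largest=False)[0]) # works: three smallest eigenvalues

cplx_h = torch.sparse_coo_tensor(idx, vals.to(torch.complex128), (n, n))
lobpcg(cplx_h, k=3, largest=False) # raises the RuntimeError quoted above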

EDIT:
Running h = torch.complex(h.real, h.imag).to(device) I get the error message: NotImplementedError: Could not run 'aten::view_as_real' with arguments from the 'SparseCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::view_as_real' is only available for these backends: [CPU, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

(The error message continues with the per-backend kernel registration listing from the PyTorch dispatcher, omitted here for brevity.)
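
Regarding the edit: the view_as_real failure is a layout problem rather than a dtype problem. Tensor.real and Tensor.imag go through aten::view_as_real, which (as the backend list above shows) has no sparse kernel, so the round trip through torch.complex cannot work on a sparse tensor. It is also unnecessary: torch.sparse_coo_tensor already keeps a complex dtype when the values it is given are complex. A sketch of a variant of Torchfy that makes this explicit (the cast to np.complex128 is an assumption about the intended precision):

# Hypothetical variant of the Torchfy helper that forces complex128 values up
# front, avoiding torch.complex / .real / .imag on the sparse tensor entirely.
import numpy as np
import torch

def torchfy_complex(arr, device):
    arr = arr.tocoo() # COO format, as in the original helper
    return torch.sparse_coo_tensor(
        indices=torch.tensor(np.array([arr.row, arr.col])),
        values=torch.tensor(arr.data.astype(np.complex128)),
        size=arr.shape,
    ).to(device)

This removes the NotImplementedError, but torch.lobpcg would still be expected to reject the complex input, which leads back to the original RuntimeError.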

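Since both errors point at torch.lobpcg supporting only real symmetric operators, one standard workaround (a sketch under that assumption, not something taken from this thread) is the real-embedding trick: a complex Hermitian H = A + iB, with A = H.real symmetric and B = H.imag antisymmetric, has the same spectrum as the real symmetric 2N x 2N matrix [[A, -B], [B, A]], with every eigenvalue appearing twice.

# Sketch of the real-embedding workaround; reuses Torchfy, H_L, X, Y, Z from above.
from scipy import sparse
from torch import lobpcg

def real_embedding(h):
    a, b = h.real, h.imag # scipy sparse matrices expose .real / .imag
    return sparse.bmat([[a, -b], [b, a]], format='coo') # real symmetric embedding

h_emb = Torchfy(real_embedding(H_L(X, Y, Z)))
vals, vecs = lobpcg(h_emb, k=20, largest=False) # request 2 * 10 eigenvalues
lowest10 = vals[::2] # each eigenvalue of H shows up twice in the sorted output

An eigenvector u + iv of H corresponds to the eigenvector (u, v) of the embedding, and taking every other eigenvalue to deduplicate assumes the solver converged and returned the pairs adjacent in sorted order.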
python pytorch gpu complex-numbers eigenvalue

Comments


A: No answers yet