tensorcircuit.experimental
Experimental features
- tensorcircuit.experimental.adaptive_vmap(f: Callable[[...], Any], vectorized_argnums: Union[int, Sequence[int]] = 0, static_argnums: Optional[Union[int, Sequence[int]]] = None, chunk_size: Optional[int] = None) Callable[[...], Any] [source]
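The idea behind a chunked vmap is to map a function over the leading axis of its inputs in bounded-size chunks, trading some speed for a lower peak memory footprint. A minimal NumPy sketch of that idea (the helper name and structure are illustrative, not the tensorcircuit implementation):

```python
import numpy as np

def chunked_vmap(f, chunk_size=None):
    """Illustrative chunked vectorization: apply f elementwise over the
    leading axis of the input, processing at most chunk_size rows at a
    time to bound peak memory. Not the tensorcircuit implementation."""
    def wrapped(xs):
        n = xs.shape[0]
        size = n if chunk_size is None else chunk_size
        chunks = [np.stack([f(x) for x in xs[i:i + size]])
                  for i in range(0, n, size)]
        return np.concatenate(chunks, axis=0)
    return wrapped

square_batch = chunked_vmap(lambda x: x ** 2, chunk_size=2)
print(square_batch(np.arange(5.0)))  # [ 0.  1.  4.  9. 16.]
```

With `chunk_size=None` the whole batch is processed at once, matching plain vectorization.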
- tensorcircuit.experimental.dynamics_matrix(f: Callable[[...], Any], *, kernel: str = 'dynamics', postprocess: Optional[str] = None, mode: str = 'fwd') Callable[[...], Any]
- tensorcircuit.experimental.dynamics_rhs(f: Callable[[...], Any], h: Any) Callable[[...], Any] [source]
- tensorcircuit.experimental.evol_global(c: Any, h_fun: Callable[[...], Any], t: float, *args: Any, **solver_kws: Any) Any [source]
ODE evolution of a time-dependent Hamiltonian acting on all qubits of the circuit (only the JAX backend is supported for now)
- tensorcircuit.experimental.evol_local(c: Any, index: Sequence[int], h_fun: Callable[[...], Any], t: float, *args: Any, **solver_kws: Any) Any [source]
ODE evolution of a time-dependent Hamiltonian acting on the qubits given by index (only the JAX backend is supported for now)
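Both evolution helpers amount to integrating the Schrödinger equation dψ/dt = -i H(t) ψ with an ODE solver. A self-contained fixed-step RK4 sketch in NumPy (the function and argument names here are illustrative, not the tensorcircuit API, which delegates to a JAX ODE solver):

```python
import numpy as np

def evolve_schrodinger(h_fun, psi0, t, steps=1000):
    """Illustrative integration of dpsi/dt = -1j * H(s) @ psi from s=0
    to s=t with fixed-step RK4. h_fun maps a time to a Hamiltonian matrix."""
    dt = t / steps
    psi = psi0.astype(complex)
    rhs = lambda s, p: -1j * (h_fun(s) @ p)
    for k in range(steps):
        s = k * dt
        k1 = rhs(s, psi)
        k2 = rhs(s + dt / 2, psi + dt / 2 * k1)
        k3 = rhs(s + dt / 2, psi + dt / 2 * k2)
        k4 = rhs(s + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return psi

# evolve |0> under the (time-independent) Pauli-X for t = pi/2:
# exp(-i X pi/2)|0> = -i|1>
x = np.array([[0.0, 1.0], [1.0, 0.0]])
psi_t = evolve_schrodinger(lambda s: x, np.array([1.0, 0.0]), np.pi / 2)
```

A time-dependent `h_fun` (e.g. a driven pulse) works the same way, since the RK4 stages evaluate the Hamiltonian at intermediate times.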
- tensorcircuit.experimental.finite_difference_differentiator(f: Callable[[...], Any], argnums: Tuple[int, ...] = (0,), shifts: Tuple[float, float] = (0.001, 0.002)) Callable[[...], Any] [source]
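A finite-difference differentiator approximates the gradient numerically from shifted evaluations of the function. A minimal sketch of the central-difference scheme for a scalar function of one argument (simplified from the argnums-based API; the helper name is illustrative):

```python
import numpy as np

def finite_difference_grad(f, shift=1e-3):
    """Illustrative central finite difference:
    df/dx ~ (f(x + shift) - f(x - shift)) / (2 * shift),
    with O(shift**2) truncation error."""
    def grad(x):
        return (f(x + shift) - f(x - shift)) / (2 * shift)
    return grad

g = finite_difference_grad(np.sin)
print(g(0.0))  # close to cos(0) = 1
```

Unlike the parameter-shift functions below, this is a generic numerical approximation and works for any black-box function, at the cost of discretization error.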
- tensorcircuit.experimental.hamiltonian_evol(tlist: Any, h: Any, psi0: Any, callback: Optional[Callable[[...], Any]] = None) Any [source]
Fast implementation of static full Hamiltonian evolution (defaults to imaginary time)
- Parameters
tlist (Tensor) -- the list of time points at which the evolved state is evaluated
h (Tensor) -- the static full Hamiltonian matrix
psi0 (Tensor) -- the initial state vector
callback (Optional[Callable[..., Any]], optional) -- function applied to the evolved state at each time in tlist, defaults to None
- Returns
the resulting dynamics on tlist
- Return type
Tensor
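For a static Hamiltonian, imaginary-time evolution can be computed in closed form from an eigendecomposition: ψ(t) ∝ exp(-H t) ψ0, renormalized at each time. A NumPy sketch under that assumption (names mirror the documented arguments, but the implementation is illustrative, not the library's):

```python
import numpy as np

def imaginary_time_evol(tlist, h, psi0):
    """Illustrative static-Hamiltonian imaginary-time evolution:
    psi(t) = exp(-h * t) @ psi0 / ||...||, computed by diagonalizing h
    once and reusing the eigenbasis for every t. Assumes h is Hermitian."""
    w, v = np.linalg.eigh(h)       # eigenvalues w, eigenvectors v
    c = v.conj().T @ psi0          # expand psi0 in the eigenbasis
    states = []
    for t in tlist:
        psi = v @ (np.exp(-w * t) * c)
        states.append(psi / np.linalg.norm(psi))
    return np.stack(states)

h = np.diag([0.0, 1.0])
psi0 = np.array([1.0, 1.0]) / np.sqrt(2.0)
psis = imaginary_time_evol([0.0, 50.0], h, psi0)
# long imaginary time projects onto the ground state |0>
```

Diagonalizing once and scanning over tlist is what makes this approach fast compared to stepping an ODE solver for each time point.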
- tensorcircuit.experimental.parameter_shift_grad(f: Callable[[...], Any], argnums: Union[int, Sequence[int]] = 0, jit: bool = False, shifts: Tuple[float, float] = (1.5707963267948966, 2)) Callable[[...], Any] [source]
Similar to the grad function, but uses the parameter-shift rule internally instead of AD; vmap is utilized for the evaluation, so the speed is still reasonable
- Parameters
f (Callable[..., Tensor]) -- quantum function with weights in and expectation out
argnums (Union[int, Sequence[int]], optional) -- labels which args should be differentiated, defaults to 0
jit (bool, optional) -- whether to jit the original function f at the beginning, defaults to False
shifts (Tuple[float, float]) -- two floats: the shift applied in the numerator and the divisor in the denominator, defaults to (pi/2, 2) for the parameter-shift rule
- Returns
the grad function
- Return type
Callable[..., Tensor]
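The parameter-shift rule evaluates the function at two shifted parameter values, (f(x + s) - f(x - s)) / d with (s, d) = shifts; for expectation values of standard Pauli rotations with shifts = (pi/2, 2) this gradient is exact, not a finite-difference approximation. A scalar sketch (the helper name is illustrative; the library version additionally handles argnums, vmap batching, and jit):

```python
import numpy as np

def parameter_shift_grad(f, shifts=(np.pi / 2, 2.0)):
    """Illustrative parameter-shift gradient for one scalar parameter:
    df/dx = (f(x + s) - f(x - s)) / d with (s, d) = shifts."""
    s, d = shifts
    def grad(x):
        return (f(x + s) - f(x - s)) / d
    return grad

# the expectation <Z> after Rx(theta) applied to |0> is cos(theta),
# so its exact derivative is -sin(theta)
expval = np.cos
g = parameter_shift_grad(expval)
print(g(0.3), -np.sin(0.3))  # the two values agree exactly
```

Because only function evaluations are needed, the rule also works on hardware or with sampled expectations, where AD is unavailable.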
- tensorcircuit.experimental.parameter_shift_grad_v2(f: Callable[[...], Any], argnums: Union[int, Sequence[int]] = 0, jit: bool = False, random_argnums: Optional[Sequence[int]] = None, shifts: Tuple[float, float] = (1.5707963267948966, 2)) Callable[[...], Any] [source]
Similar to the grad function, but uses the parameter-shift rule internally instead of AD; vmap is utilized for the evaluation. v2 also supports a random generator for finite measurement shots; only the JAX backend is supported, since no vmap randomness is available in TensorFlow
- Parameters
f (Callable[..., Tensor]) -- quantum function with weights in and expectation out
argnums (Union[int, Sequence[int]], optional) -- labels which args should be differentiated, defaults to 0
jit (bool, optional) -- whether to jit the original function f at the beginning, defaults to False
shifts (Tuple[float, float]) -- two floats: the shift applied in the numerator and the divisor in the denominator, defaults to (pi/2, 2) for the parameter-shift rule
- Returns
the grad function
- Return type
Callable[..., Tensor]