CosmoSolver
- class cosmolayer.CosmoSolver(*args, **kwargs)
COSMO self-consistent equation solver.
Solves the COSMO self-consistent equations for the logarithm of the segment activity coefficient vector, \(\ln \boldsymbol{\gamma}\), given the nonnegative probability distribution vector \(\mathbf{p}\) and the reduced interaction energy matrix \(\mathbf{U}/(RT)\).
The self-consistent equations are:
\[\boldsymbol{\gamma} \circ \left( \mathbf{B} (\mathbf{p} \circ \boldsymbol{\gamma}) \right) = t \mathbf{1},\]
where \(\mathbf{B} = \exp(-\mathbf{U}/(RT))\) is the matrix of Boltzmann factors, \(t = \mathbf{1}^\mathsf{T} \mathbf{p}\) is the sum of the probabilities, and \(\circ\) denotes the elementwise product.
The solution satisfies \(\boldsymbol{\gamma}^\mathsf{T} \mathbf{M} \boldsymbol{\gamma} = t^2\), where \(\mathbf{M} = \mathbf{B} \circ (\mathbf{p}\mathbf{p}^\mathsf{T})\); this follows from premultiplying the self-consistent equations by \(\mathbf{p}^\mathsf{T}\), since \(\mathbf{p}^\mathsf{T}(t\mathbf{1}) = t^2\).
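A minimal sketch of one way to obtain \(\boldsymbol{\gamma}\) by damped successive substitution (a standalone illustration under the conventions above, not the library's implementation; solve_gamma is a hypothetical helper name):

import torch

def solve_gamma(p, U_RT, max_iter=100, tol=1e-10):
    """Damped successive substitution for gamma * (B @ (p * gamma)) = t * 1."""
    B = torch.exp(-U_RT)                # Boltzmann factors B = exp(-U/(RT))
    t = p.sum(dim=-1, keepdim=True)     # t = 1^T p
    gamma = torch.ones_like(p)          # initial guess: gamma = 1
    for _ in range(max_iter):
        # Fixed-point update: gamma_i <- t / sum_j B_ij p_j gamma_j
        new = t / (B @ (p * gamma).unsqueeze(-1)).squeeze(-1)
        new = 0.5 * (new + gamma)       # damping stabilizes the iteration
        if torch.allclose(new, gamma, rtol=0.0, atol=tol):
            return new.log()            # ln(gamma)
        gamma = new
    raise RuntimeError("fixed-point iteration did not converge")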
Note
Supports batching: \(\mathbf{p}\) and \(\mathbf{U}/(RT)\) may have broadcastable leading dimensions, in which case all computations are performed in a single vectorized operation.
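For example, a single interaction matrix can broadcast across a batch of probability vectors (hypothetical shapes, for illustration only):

p = torch.rand(8, 51)                              # (batch, num_segment_types)
U_RT = torch.randn(51, 51)                         # shared across the batch via broadcasting
log_gamma, converged = CosmoSolver.apply(p, U_RT)  # log_gamma: shape (8, 51)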
- Parameters:
p (torch.Tensor) – Segment-type probability distribution vector. Must be nonnegative. Shape: (…, num_segment_types).
U_RT (torch.Tensor) – Reduced interaction energy matrix \(\mathbf{U}/(RT)\). Shape: (…, num_segment_types, num_segment_types).
max_iter (int, optional) – Maximum number of fixed-point iterations. Defaults to 100.
- Returns:
log_gamma (torch.Tensor) – The logarithm of the segment activity coefficient vector. Shape: (…, num_segment_types).
converged (torch.Tensor) – Boolean tensor indicating whether the fixed-point iteration converged for each batch element.
- Return type:
tuple[torch.Tensor, torch.Tensor]
- Raises:
RuntimeError – If the fixed-point solver does not converge within max_iter iterations.
Examples
>>> import torch
>>> from cosmolayer import CosmoSolver
>>> from cosmolayer.cosmosac import CosmoSac2002Model
>>> from importlib.resources import files
>>> cosmo_strings = [
...     (files("cosmolayer.data") / f"{species}.cosmo").read_text()
...     for species in ("C=C(N)O", "NCCO")
... ]
>>> probabilities = [
...     CosmoSac2002Model.create_component(cosmo_string).probabilities
...     for cosmo_string in cosmo_strings
... ]
>>> p = torch.stack(
...     [torch.tensor(prob, dtype=torch.float32) for prob in probabilities],
... ).requires_grad_(True)
>>> U_RT = torch.tensor(
...     CosmoSac2002Model.create_interaction_matrices(298.15)[0],
...     dtype=torch.float32,
...     requires_grad=True,
... )
>>> log_gamma, converged = CosmoSolver.apply(p, U_RT)
>>> converged.all().item()
True
>>> log_gamma
tensor([[-4.5...e+00, -4.0...e+00, ... -1.3...e+01],
        [-2.1...e+01, -1.9...e+01, ... -5.3...e+00]],
       grad_fn=<CosmoSolverBackward>)
>>> loss = (2 * log_gamma).exp().sum()
>>> loss.backward()
>>> p.grad
tensor([[ 2.1...e+02,  2.1...e+02, ... -7.4...e+05],
        [-6.6...e+02, -6.3...e+02, ...  7.4...e+02]])
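One way to sanity-check the custom backward is torch.autograd.gradcheck, which compares it against finite differences (a sketch assuming the solver accepts float64 inputs; it reuses p and U_RT from the example above):

p64 = p.detach().double().requires_grad_(True)
U64 = U_RT.detach().double().requires_grad_(True)
torch.autograd.gradcheck(
    lambda a, b: CosmoSolver.apply(a, b)[0],  # check gradients of log_gamma only
    (p64, U64),
)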
Methods
- static backward(ctx, grad_log_gamma, grad_converged)
Define a formula for differentiating the operation with backward mode automatic differentiation.
This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)
It must accept a context ctx as the first argument, followed by as many outputs as the forward returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward. Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward will have ctx.needs_input_grad[0] = True if the first input to forward needs gradient computed w.r.t. the output.
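As a hedged illustration of this contract (a toy Function unrelated to the COSMO math): two forward outputs mean backward receives two incoming gradients, and two forward inputs mean it returns two gradients.

import torch

class ScaledSquare(torch.autograd.Function):
    """Toy example of the backward contract described above."""

    @staticmethod
    def forward(ctx, x, scale=2.0):
        ctx.save_for_backward(x)     # tensors are saved via save_for_backward
        ctx.scale = scale            # non-tensor data may be stored on ctx directly
        flag = torch.tensor(True)    # non-differentiable second output
        return scale * x**2, flag

    @staticmethod
    def backward(ctx, grad_out, grad_flag):   # one argument per forward output
        (x,) = ctx.saved_tensors
        grad_x = None
        if ctx.needs_input_grad[0]:           # skip work when x needs no gradient
            grad_x = grad_out * 2.0 * ctx.scale * x
        return grad_x, None                   # one return value per forward input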
- static forward(ctx, p, U_RT, max_iter=100)
Define the forward of the custom autograd Function.
This function is to be overridden by all subclasses. There are two ways to define forward:
Usage 1 (Combined forward and ctx):
@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
See Combined or separate forward() and setup_context() for more details.
Usage 2 (Separate forward and ctx):
@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
The forward no longer accepts a ctx argument.
Instead, you must also override the torch.autograd.Function.setup_context static method to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.
See Extending torch.autograd for more details.
The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward if they are intended to be used in jvp.
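A minimal sketch of Usage 2 combined with ctx.save_for_backward (again a toy Function, not CosmoSolver's implementation):

import torch
from typing import Any, Tuple

class ScaledExp(torch.autograd.Function):
    """Toy example of the separate forward/setup_context style."""

    @staticmethod
    def forward(x: torch.Tensor, scale: float) -> torch.Tensor:
        return scale * torch.exp(x)    # no ctx argument in this style

    @staticmethod
    def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
        ctx.save_for_backward(output)  # d(scale*exp(x))/dx equals the output itself

    @staticmethod
    def backward(ctx, grad_out):
        (y,) = ctx.saved_tensors
        return grad_out * y, None      # gradients for (x, scale)

It is invoked like any other Function, e.g. ScaledExp.apply(torch.randn(3, requires_grad=True), 2.0), mirroring CosmoSolver.apply above.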