CosmoSolver#

class cosmolayer.CosmoSolver(*args, **kwargs)[source]#

COSMO self-consistent equation solver.

Solves the COSMO self-consistent equations for the logarithm of the activity coefficient vector, \(\ln \boldsymbol{\gamma}\), given the nonnegative probability distribution vector \(\mathbf{p}\) and the reduced interaction energy matrix \(\mathbf{U}/(RT)\).

The self-consistent equations are:

\[\boldsymbol{\gamma} \circ \left( \mathbf{B} (\mathbf{p} \circ \boldsymbol{\gamma}) \right) = t \mathbf{1},\]

where \(\mathbf{B} = \exp(-\mathbf{U}/(RT))\) is the matrix of Boltzmann factors, \(t=\mathbf{1}^T \mathbf{p}\) is the sum of the probabilities, and \(\circ\) represents an elementwise product.

The solution satisfies \(\boldsymbol{\gamma}^T \mathbf{M} \boldsymbol{\gamma} = t\), where \(\mathbf{M} = \mathbf{B} \circ (\mathbf{p}\mathbf{p}^T)\).
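Rearranged elementwise, the fixed point reads \(\gamma_i = t \big/ \sum_j B_{ij}\, p_j \gamma_j\), which suggests solving by damped successive substitution. The sketch below is illustrative only; the helper name, damping factor, and stopping rule are assumptions, not the solver's actual implementation:

import torch

def cosmo_fixed_point(p, U_RT, max_iter=100, tol=1e-10):
    # Illustrative successive substitution; NOT the actual CosmoSolver code.
    B = torch.exp(-U_RT)                    # Boltzmann factors B = exp(-U/(RT))
    t = p.sum(dim=-1, keepdim=True)         # t = 1^T p
    gamma = torch.ones_like(p)              # initial guess: gamma = 1
    for _ in range(max_iter):
        # Fixed point rearranged as gamma = t / (B (p ∘ gamma))
        denom = torch.einsum("...ij,...j->...i", B, p * gamma)
        gamma_new = t / denom
        if torch.allclose(gamma, gamma_new, rtol=tol):
            break
        gamma = 0.5 * (gamma + gamma_new)   # damping stabilizes the iteration
    return torch.log(gamma)

At convergence, gamma = log_gamma.exp() satisfies the quadratic-form identity above, which can serve as a numerical sanity check.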

Note

Supports batching: \(\mathbf{p}\) and \(\mathbf{U}/(RT)\) may have broadcastable leading dimensions, in which case all computations are performed in a single vectorized operation, as shown below.
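For example, a batch of distributions can share a single interaction matrix (the shapes and values below are purely illustrative):

>>> p = torch.rand(8, 51)            # batch of 8 distributions
>>> U_RT = torch.randn(51, 51)       # one shared interaction matrix
>>> log_gamma, converged = CosmoSolver.apply(p, U_RT)
>>> log_gamma.shape
torch.Size([8, 51])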

Parameters:
  • p (torch.Tensor) – Segment-type probability distribution vector. Must be nonnegative. Shape: (…, num_segment_types).

  • U_RT (torch.Tensor) – Reduced interaction energy matrix \(\mathbf{U}/(RT)\). Shape: (…, num_segment_types, num_segment_types).

  • max_iter (int) – Maximum number of fixed-point iterations. Default: 100.

Returns:

log_gamma – The logarithm of the segment activity coefficient vector. Shape: (…, num_segment_types).

converged – Boolean tensor indicating whether the fixed-point iteration converged.

Return type:

tuple[torch.Tensor, torch.Tensor]

Raises:

RuntimeError – If the fixed-point solver does not converge within max_iter iterations.

Examples

>>> import torch
>>> from cosmolayer import CosmoSolver
>>> from cosmolayer.cosmosac import CosmoSac2002Model
>>> from importlib.resources import files
>>> cosmo_strings = [
...     (files("cosmolayer.data") / f"{species}.cosmo").read_text()
...     for species in ("C=C(N)O", "NCCO")
... ]
>>> probabilities = [
...     CosmoSac2002Model.create_component(cosmo_string).probabilities
...     for cosmo_string in cosmo_strings
... ]
>>> p = torch.stack(
...     [torch.tensor(prob, dtype=torch.float32) for prob in probabilities],
... ).requires_grad_(True)
>>> U_RT = torch.tensor(
...     CosmoSac2002Model.create_interaction_matrices(298.15)[0],
...     dtype=torch.float32,
...     requires_grad=True,
... )
>>> log_gamma, converged = CosmoSolver.apply(p, U_RT)
>>> converged.all().item()
True
>>> log_gamma
tensor([[-4.5...e+00, -4.0...e+00, ... -1.3...e+01],
        [-2.1...e+01, -1.9...e+01, ... -5.3...e+00]], grad_fn=<CosmoSolverBackward>)
>>> loss = (2 * log_gamma).exp().sum()
>>> loss.backward()
>>> p.grad
tensor([[ 2.1...e+02,  2.1...e+02, ... -7.4...e+05],
        [-6.6...e+02, -6.3...e+02, ...  7.4...e+02]])

Methods

static backward(ctx, grad_log_gamma, grad_converged)[source]#

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as the forward returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward. Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward will have ctx.needs_input_grad[0] = True if the first input to forward needs gradient computed w.r.t. the output.

Return type:

tuple[Tensor | None, Tensor | None, None]
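Schematically, this contract can be illustrated with a toy Function (hypothetical, not part of cosmolayer) that mirrors CosmoSolver's structure: three forward inputs (two tensors plus a non-tensor max_iter) and two outputs, so backward accepts two gradients and returns three values, the last of which is always None:

import torch

class ToyScale(torch.autograd.Function):
    # Hypothetical example mirroring CosmoSolver's input/output structure.

    @staticmethod
    def forward(ctx, x, w, max_iter=100):
        ctx.save_for_backward(x, w)
        flag = torch.ones_like(x, dtype=torch.bool)  # non-differentiable output
        return x * w, flag

    @staticmethod
    def backward(ctx, grad_out, grad_flag):
        x, w = ctx.saved_tensors
        # One return value per forward input; grad_flag (for the boolean
        # output) is ignored, and None is returned for the non-tensor max_iter.
        grad_x = grad_out * w if ctx.needs_input_grad[0] else None
        grad_w = grad_out * x if ctx.needs_input_grad[1] else None
        return grad_x, grad_w, None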

static forward(ctx, p, U_RT, max_iter=100)[source]#

Define the forward of the custom autograd Function.

This function is to be overridden by all subclasses. There are two ways to define forward:

Usage 1 (Combined forward and ctx):

@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass

Usage 2 (Separate forward and ctx):

@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass


@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass

  • The forward no longer accepts a ctx argument.

  • Instead, you must also override the torch.autograd.Function.setup_context staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.

  • See Extending torch.autograd for more details.

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward if they are intended to be used in jvp.

Return type:

tuple[Tensor, Tensor]
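A minimal sketch of Usage 2 (hypothetical Square function, unrelated to cosmolayer), saving the input tensor in setup_context for reuse in backward:

import torch

class Square(torch.autograd.Function):
    # Usage 2: forward has no ctx; setup_context stores what backward needs.

    @staticmethod
    def forward(x):
        return x ** 2

    @staticmethod
    def setup_context(ctx, inputs, output):
        (x,) = inputs
        ctx.save_for_backward(x)             # retrieved via ctx.saved_tensors

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out              # d(x**2)/dx = 2x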