The SCM is a statistical ensemble of random graphs $G$ having $n$ vertices ($n=|V(G)|$) labeled $\{v_j\}_{j=1}^{n}=V(G)$, producing a probability distribution on $\mathcal{G}_n$ (the set of graphs of size $n$). Imposed on the ensemble are $n$ constraints, namely that the ensemble average of the degree $k_j$ of vertex $v_j$ is equal to a designated value $\widehat{k}_j$, for all $v_j\in V(G)$. The model is fully parameterized by its size $n$ and expected degree sequence $\{\widehat{k}_j\}_{j=1}^{n}$. These constraints are both local (one constraint associated with each vertex) and soft (constraints on the ensemble average of certain observable quantities), and thus yield a canonical ensemble with an extensive number of constraints.[3] The conditions $\langle k_j\rangle = \widehat{k}_j$ are imposed on the ensemble by the method of Lagrange multipliers (see Maximum-entropy random graph model).
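The soft (canonical) nature of the constraints can be checked numerically: the degree of each vertex fluctuates from sample to sample, and only its average over many sampled graphs matches the target. A minimal Monte Carlo sketch, under the assumption that edges are drawn independently from a given symmetric probability matrix `p` (hypothetical values here), so that $\langle k_j\rangle = \sum_i p_{ij}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Hypothetical symmetric edge-probability matrix (zero diagonal, no self-loops)
p = np.array([[0.0, 0.2, 0.5, 0.3],
              [0.2, 0.0, 0.4, 0.6],
              [0.5, 0.4, 0.0, 0.1],
              [0.3, 0.6, 0.1, 0.0]])
target = p.sum(axis=1)  # ensemble-average degrees <k_j>

samples = 20000
deg_sum = np.zeros(n)
for _ in range(samples):
    # Draw each upper-triangular edge independently, then symmetrize
    upper = np.triu(rng.random((n, n)) < p, 1)
    A = upper + upper.T
    deg_sum += A.sum(axis=1)

print(np.round(deg_sum / samples, 2))  # close to target, up to Monte Carlo noise
print(np.round(target, 2))
```

Any individual sampled adjacency matrix `A` has integer degrees that miss the targets; only the ensemble average satisfies the constraints, which is exactly what "soft" means here.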
The probability $\mathbb{P}_{\text{SCM}}(G)$ of the SCM producing a graph $G$ is determined by maximizing the Gibbs entropy $S[G]$ subject to the constraints $\langle k_j\rangle = \widehat{k}_j,\ j=1,\ldots,n$ and normalization $\sum_{G\in\mathcal{G}_n}\mathbb{P}_{\text{SCM}}(G)=1$. This amounts to optimizing the multi-constraint Lagrange function below:

$$\mathcal{L}\left(\alpha,\{\psi_j\}_{j=1}^{n}\right) = -\sum_{G\in\mathcal{G}_n}\mathbb{P}_{\text{SCM}}(G)\log\mathbb{P}_{\text{SCM}}(G) + \alpha\left(1-\sum_{G\in\mathcal{G}_n}\mathbb{P}_{\text{SCM}}(G)\right) + \sum_{j=1}^{n}\psi_j\left(\widehat{k}_j-\sum_{G\in\mathcal{G}_n}\mathbb{P}_{\text{SCM}}(G)\,k_j(G)\right),$$
where $\alpha$ and $\{\psi_j\}_{j=1}^{n}$ are the $n+1$ multipliers to be fixed by the $n+1$ constraints (normalization and the expected degree sequence). Setting to zero the derivative of the above with respect to $\mathbb{P}_{\text{SCM}}(G)$ for an arbitrary $G\in\mathcal{G}_n$ yields

$$0 = -\log\mathbb{P}_{\text{SCM}}(G) - 1 - \alpha - \sum_{j=1}^{n}\psi_j k_j(G) \quad\Rightarrow\quad \mathbb{P}_{\text{SCM}}(G) = \frac{1}{Z}\exp\left[-\sum_{j=1}^{n}\psi_j k_j(G)\right],$$
the constant $Z := e^{\alpha+1} = \sum_{G\in\mathcal{G}_n}\exp\left[-\sum_{j=1}^{n}\psi_j k_j(G)\right] = \prod_{1\leq i<j\leq n}\left(1+e^{-(\psi_i+\psi_j)}\right)$[4] being the partition function normalizing the distribution; the above exponential expression applies to all $G\in\mathcal{G}_n$, and thus is the probability distribution. Hence we have an exponential family parameterized by $\{\psi_j\}_{j=1}^{n}$, which are related to the expected degree sequence $\{\widehat{k}_j\}_{j=1}^{n}$ by the following equivalent expressions:

$$\langle k_q\rangle = \sum_{G\in\mathcal{G}_n} k_q(G)\,\mathbb{P}_{\text{SCM}}(G) = -\frac{\partial \log Z}{\partial \psi_q} = \sum_{\substack{j=1\\ j\neq q}}^{n}\frac{1}{e^{\psi_q+\psi_j}+1} = \widehat{k}_q.$$
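In practice the multipliers $\{\psi_j\}$ are found numerically by solving the system $\sum_{j\neq q}\left(e^{\psi_q+\psi_j}+1\right)^{-1}=\widehat{k}_q$ above. A sketch using a generic root finder (`scipy.optimize.fsolve`), with a hypothetical expected degree sequence, assuming the Fermi-Dirac connection probabilities $p_{ij}=\left(1+e^{\psi_i+\psi_j}\right)^{-1}$ implied by the product form of $Z$:

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical target expected degree sequence (must be achievable, i.e. each < n-1)
k_hat = np.array([1.0, 1.5, 1.5, 2.0])
n = len(k_hat)

def edge_probs(psi):
    """Connection probabilities p_ij = 1 / (1 + exp(psi_i + psi_j))."""
    P = 1.0 / (1.0 + np.exp(np.add.outer(psi, psi)))
    np.fill_diagonal(P, 0.0)  # no self-loops
    return P

def residual(psi):
    # <k_q> - k_hat_q, which vanishes at the solution
    return edge_probs(psi).sum(axis=1) - k_hat

psi = fsolve(residual, np.zeros(n))  # solve the n coupled constraint equations
P = edge_probs(psi)
print(np.round(P.sum(axis=1), 6))  # recovers k_hat
```

Once $\{\psi_j\}$ is known, a graph is sampled from the ensemble by drawing each edge $\{v_i,v_j\}$ independently with probability $p_{ij}$, consistent with the factorization of $Z$ over vertex pairs.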
van der Hoorn, Pim; Gabor Lippner; Dmitri Krioukov (2017-10-10). "Sparse Maximum-Entropy Random Graphs with a Given Power-Law Degree Distribution". arXiv:1705.10261.
Garlaschelli, Diego; Frank den Hollander; Andrea Roccaverde (January 30, 2018). "Covariance structure behind breaking of ensemble equivalence in random graphs" (PDF). Archived (PDF) from the original on February 4, 2023. Retrieved September 14, 2018. http://eprints.imtlucca.it/4040/1/1711.04273.pdf
Park, Juyong; M. E. J. Newman (2004-05-25). "The statistical mechanics of networks". arXiv:cond-mat/0405566.