Conductance (graph theory)

{{short description|A mixing property of Markov chains and graphs}}
{{Network Science}}
[[File:Graph conductance.svg|thumb|An undirected graph {{mvar|G}} and a few example cuts with the corresponding conductances]]


In [[theoretical computer science]], [[graph theory]], and [[mathematics]], the '''conductance''' is a parameter of a [[Markov chain]] that is closely tied to its [[Markov chain mixing time|mixing time]], that is, how rapidly the chain converges to its [[Markov chain#Steady-state analysis and limiting distributions|stationary distribution]], should it exist. Equivalently, the conductance can be viewed as a parameter of a directed [[Graph (discrete mathematics)|graph]], in which case it can be used to analyze how quickly [[random walk|random walks]] in the graph converge.


The conductance of a graph is closely related to the [[Cheeger constant (graph theory)|Cheeger constant]] of the graph, which is also known as the [[Expander graph#Edge expansion|edge expansion]] or the isoperimetric number. However, due to subtly different definitions, the conductance and the edge expansion do not generally coincide when the graph is not [[Regular graph|regular]]. On the other hand, the notion of [[electrical conductance]] that appears in [[electrical networks]] is unrelated to the conductance of a graph.


== History ==


The conductance was first defined by [[Mark Jerrum]] and [[Alistair Sinclair]] in 1988 to prove that the [[Permanent (mathematics)|permanent]] of a matrix with entries from {{math|1={0,1<nowiki>}</nowiki>}} has a [[polynomial-time approximation scheme]].{{sfn | Jerrum | Sinclair | 1988 | pp=235–244}} In the proof, Jerrum and Sinclair studied the Markov chain that switches between perfect and near-perfect [[Matching_(graph_theory)|matchings]] in [[bipartite graph]]s by adding or removing individual edges. They defined and used the conductance to prove that this Markov chain is [[Markov chain mixing time|rapidly mixing]]. This means that, after running the Markov chain for a polynomial number of steps, the resulting distribution is guaranteed to be close to the stationary distribution, which in this case is the uniform distribution on the set of all perfect and near-perfect matchings. This rapidly mixing Markov chain makes it possible in polynomial time to draw approximately uniform random samples from the set of all perfect matchings in the bipartite graph, which in turn gives rise to the [[polynomial-time approximation scheme]] for computing the permanent.


== Definition ==
For undirected {{mvar|d}}-regular graphs <math>G</math> without edge weights, the conductance <math>\varphi(G)</math> is equal to the [[Cheeger constant (graph theory)|Cheeger constant]] <math>h(G)</math> divided by {{mvar|d}}, that is, we have <math>\varphi(G) = h(G) / d</math>.


More generally, let <math>G</math> be a directed graph with <math>n</math> vertices, vertex set <math>V</math>, edge set <math>E</math>, and real weights <math>a_{ij} \geq 0</math> on each edge <math>ij\in E</math>. Let <math>S\subseteq V</math> be any vertex subset. The conductance <math>\varphi(S)</math> of the [[Cut (graph theory)|cut]] <math>(S, \bar S)</math> is defined via<math display="block">\varphi(S) = \frac{\displaystyle a(S,\bar S)}{\min(\mathrm{vol}(S),\mathrm{vol}(\bar S))}\,,</math>where<math display="block">a(S,T) = \sum_{i \in S} \sum_{j \in T} a_{ij}\,,</math>so that <math>a(S,\bar S)</math> is the total weight of the edges crossing the cut from <math>S</math> to <math>\bar S</math>, and<math display="block">\mathrm{vol}(S) = a(S,V)= \sum_{i \in S} \sum_{j \in V} a_{ij}</math>is the ''volume'' of <math>S</math>, that is, the total weight of all edges that start in <math>S</math>. If <math>\mathrm{vol}(S)</math> equals <math>0</math>, then <math>a(S,\bar S)</math> also equals <math>0</math> and <math>\varphi(S)</math> is defined as <math>1</math>.


The '''conductance''' <math>\varphi(G)</math> of the graph <math>G</math> is now defined as the minimum conductance over all possible cuts:<math display="block">\varphi(G) = \min_{S \subseteq V}\varphi(S).</math>Equivalently, the conductance satisfies<math display="block">\varphi(G) = \min\left\{\frac{a(S,\bar S)}{\mathrm{vol}(S)}\;\colon\; {\mathrm{vol}(S)\leq \frac{\mathrm{vol}(V)}{2}}\right\}\,.</math>
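
The definition can be made concrete with a short, illustrative Python sketch (not taken from the cited sources; the function names are ours). It represents the graph as a dense adjacency matrix and finds <math>\varphi(G)</math> by brute force over all cuts, which is feasible only for very small graphs:

<syntaxhighlight lang="python">
from itertools import combinations

def cut_weight(a, S, T):
    """a(S, T): total weight of the edges going from S to T."""
    return sum(a[i][j] for i in S for j in T)

def volume(a, S):
    """vol(S) = a(S, V): total weight of the edges starting in S."""
    return sum(a[i][j] for i in S for j in range(len(a)))

def cut_conductance(a, S):
    """phi(S); by convention it equals 1 when min(vol(S), vol(S-bar)) = 0."""
    S = set(S)
    Sbar = set(range(len(a))) - S
    denom = min(volume(a, S), volume(a, Sbar))
    return 1.0 if denom == 0 else cut_weight(a, S, Sbar) / denom

def graph_conductance(a):
    """phi(G): minimum of phi(S) over all cuts.  The trivial cuts
    S = {} and S = V contribute phi = 1 by the convention above."""
    n = len(a)
    best = 1.0
    for k in range(1, n):
        for S in combinations(range(n), k):
            best = min(best, cut_conductance(a, S))
    return best

# Unweighted 4-cycle 0-1-2-3-0: the cut S = {0, 1} has two crossing
# edges and vol(S) = 4, so phi(G) = 2/4 = 0.5.
a = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
print(graph_conductance(a))  # 0.5
</syntaxhighlight>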


==Generalizations and applications==
In practical applications, one often considers the conductance only over a cut. A common generalization of conductance is to handle the case of weights assigned to the edges: then the weights are added; if the weight is in the form of a resistance, then the reciprocal weights are added.

The notion of conductance underpins the study of [[percolation]] in physics and other applied areas; thus, for example, the permeability of petroleum through porous rock can be modeled in terms of the conductance of a graph, with weights given by pore sizes.


Conductance also helps measure the quality of a [[spectral clustering]]. The maximum among the conductances of the clusters provides a bound which can be used, along with inter-cluster edge weight, to define a measure of the quality of the clustering. Intuitively, the conductance of a cluster (which can be seen as a set of vertices in a graph) should be low. Apart from this, the conductance of the subgraph induced by a cluster (called the "internal conductance") can be used as well.
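
As a small illustration (continuing the hypothetical sketch from the Definition section, so <code>cut_conductance</code> and the 4-cycle matrix <code>a</code> are assumed to be in scope), a clustering can be scored by the worst conductance among its clusters:

<syntaxhighlight lang="python">
def clustering_quality(a, clusters):
    """Maximum conductance over the clusters of a partition; a good
    clustering keeps this low, i.e. every cluster is well separated."""
    return max(cut_conductance(a, c) for c in clusters)

# On the 4-cycle, splitting along the cycle beats the "diagonal" split:
print(clustering_quality(a, [[0, 1], [2, 3]]))  # 0.5
print(clustering_quality(a, [[0, 2], [1, 3]]))  # 1.0
</syntaxhighlight>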


==Markov chains==


For an [[ergodic]] reversible [[Markov chain]] with an underlying [[Graph (data structure)|graph]] ''G'', the conductance is a way to measure how hard it is to leave a small set of nodes. Formally, the conductance of a graph is defined as the minimum over all sets <math>S</math> of the [[ergodic flow]] out of <math>S</math> divided by the [[flow network|capacity]] of <math>S</math>. [[Alistair Sinclair]] showed that conductance is closely tied to [[Markov chain mixing time|mixing time]] in ergodic reversible Markov chains. We can also view conductance in a more probabilistic way, as the probability of leaving a set of nodes given that the chain started in that set. This may also be written as


: <math>\Phi = \min_{S \subseteq V, 0 < \pi(S) \leq \frac{1}{2}}\Phi_S = \min_{S \subseteq V, 0 < \pi(S) \leq \frac{1}{2}}\frac{\sum_{x \in S, y \in \bar S} \pi(x) P(x,y)}{\pi(S)}, </math>
where <math>\pi</math> is the stationary distribution of the chain. In some literature, this quantity is also called the bottleneck ratio of ''G''.
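
For a concrete check, the bottleneck ratio can be computed by direct enumeration; the following self-contained Python sketch (illustrative only, not from the cited references) does so for a small reversible chain given by its transition matrix and stationary distribution:

<syntaxhighlight lang="python">
from itertools import combinations

def bottleneck_ratio(P, pi):
    """Phi: minimize Q(S, S-bar) / pi(S) over sets with 0 < pi(S) <= 1/2,
    where Q(x, y) = pi(x) * P(x, y) is the stationary edge flow."""
    n = len(pi)
    best = float("inf")
    for k in range(1, n):
        for S in combinations(range(n), k):
            pi_S = sum(pi[x] for x in S)
            if pi_S > 0.5:
                continue
            flow = sum(pi[x] * P[x][y]
                       for x in S for y in range(n) if y not in S)
            best = min(best, flow / pi_S)
    return best

# A lazy birth-and-death chain on 4 states; detailed balance holds,
# so pi below is its stationary distribution.
P = [[0.50, 0.50, 0.00, 0.00],
     [0.25, 0.50, 0.25, 0.00],
     [0.00, 0.25, 0.50, 0.25],
     [0.00, 0.00, 0.50, 0.50]]
pi = [1/6, 2/6, 2/6, 1/6]
print(bottleneck_ratio(P, pi))  # 1/6, attained by the cut S = {0, 1}
</syntaxhighlight>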

Conductance is related to [[Markov chain mixing time]] in the reversible setting. Precisely, for any irreducible, reversible Markov chain with self-loop probabilities <math> P(y,y) \geq 1/2</math> for all states <math>y</math> and an initial state <math>x \in \Omega</math>,

: <math>\frac{1}{4 \Phi} \leq \tau_x(\delta) \leq \frac{2}{\Phi^2} \big( \ln \pi(x)^{-1} + \ln \delta^{-1} \big) </math>.
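
The two sides of this bound are easy to evaluate numerically; as an unsourced sanity check on the chain from the previous sketch (which satisfies <math>P(y,y) \geq 1/2</math> in every state):

<syntaxhighlight lang="python">
import math

def mixing_time_bounds(Phi, pi_x, delta):
    """Conductance-based lower and upper bounds on tau_x(delta) for an
    irreducible reversible chain with P(y, y) >= 1/2 for all states y."""
    lower = 1 / (4 * Phi)
    upper = (2 / Phi ** 2) * (math.log(1 / pi_x) + math.log(1 / delta))
    return lower, upper

# Phi = 1/6, start in state x = 0 with pi(x) = 1/6, accuracy delta = 0.01:
print(mixing_time_bounds(1/6, 1/6, 0.01))  # (1.5, approx. 460.6)
</syntaxhighlight>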


== See also ==
* [[Resistance distance]]
* [[Percolation theory]]
* [[Krackhardt E/I Ratio]]

== Notes ==

{{reflist}}


== References ==


{{refbegin}}
* {{cite conference | last=Jerrum | first=Mark | authorlink=Mark Jerrum | last2=Sinclair | first2=Alistair | authorlink2=Alistair Sinclair | title=Conductance and the rapid mixing property for Markov chains: the approximation of permanent resolved | publisher=ACM Press | date=1988 | pages=235–244 | isbn=978-0-89791-264-8 | doi=10.1145/62212.62234}}
* {{cite book | last=Bollobás | first=Béla | authorlink=Béla Bollobás | title=Modern graph theory | series=[[Graduate Texts in Mathematics|GTM]] | volume=184 | publisher=[[Springer-Verlag]] | date=1998 | isbn=0-387-98488-7 | page=321 }}
* {{cite journal | last=Kannan | first=Ravi | last2=Vempala | first2=Santosh | last3=Vetta | first3=Adrian | title=On clusterings: Good, bad and spectral | journal=Journal of the ACM | volume=51 | issue=3 | date=2004 | issn=0004-5411 | doi=10.1145/990308.990313 | pages=497–515}}
* {{cite book | last=Chung | first=Fan R. K. | title=Spectral Graph Theory | publisher=American Mathematical Soc. | publication-place=Providence (R. I.) | date=1997 | isbn=0-8218-0315-8}}
* {{cite book | last=Sinclair | first=Alistair | authorlink=Alistair Sinclair | title=Algorithms for Random Generation and Counting: A Markov Chain Approach | publisher=Birkhäuser Boston | publication-place=Boston, MA | date=1993 | isbn=978-1-4612-6707-2 | doi=10.1007/978-1-4612-0323-0}}
* {{cite book |last=Levin |first=David A. |last2=Peres |first2=Yuval |title=[[Markov Chains and Mixing Times]]: Second Edition |publisher=American Mathematical Soc. |publication-place=Providence, Rhode Island |date=2017-10-31 |isbn=978-1-4704-2962-1}}
* {{cite book | last=Cheeger | first=Jeff | title=Problems in Analysis: A Symposium in Honor of Salomon Bochner (PMS-31) | chapter=A Lower Bound for the Smallest Eigenvalue of the Laplacian | publisher=Princeton University Press | year=1971 | pages=195–200 | isbn=978-1-4008-6931-2 | doi=10.1515/9781400869312-013}}
* {{cite journal | last=Diaconis | first=Persi | last2=Stroock | first2=Daniel | title=Geometric Bounds for Eigenvalues of Markov Chains | journal=The Annals of Applied Probability | publisher=Institute of Mathematical Statistics | volume=1 | issue=1 | year=1991 | issn=1050-5164 | jstor=2959624 | pages=36–61 }}
{{refend}}

[[Category:Markov processes]]
[[Category:Algebraic graph theory]]
[[Category:Matrices]]
