Commit 912c7c8

Upgrade ProblemReductions, improve documentation (#94)
* upgrade, improve documentation
* fix ci
* fix ci
* fix tests
* fix tests

1 parent c1fbf0c · commit 912c7c8

27 files changed (+1136, -917 lines)

.github/workflows/CI.yml (+6, -6)

```diff
@@ -36,12 +36,12 @@ jobs:
         arch:
           - x64
     steps:
-      - uses: actions/checkout@v2
-      - uses: julia-actions/setup-julia@v1
+      - uses: actions/checkout@v4
+      - uses: julia-actions/setup-julia@v2
         with:
           version: ${{ matrix.version }}
           arch: ${{ matrix.arch }}
-      - uses: actions/cache@v1
+      - uses: actions/cache@v4
         env:
           cache-name: cache-artifacts
         with:
@@ -54,16 +54,16 @@ jobs:
       - uses: julia-actions/julia-buildpkg@v1
       - uses: julia-actions/julia-runtest@v1
       - uses: julia-actions/julia-processcoverage@v1
-      - uses: codecov/codecov-action@v1
+      - uses: codecov/codecov-action@v5
         with:
          file: lcov.info
 
   docs:
     name: Documentation
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v2
-      - uses: julia-actions/setup-julia@v1
+      - uses: actions/checkout@v4
+      - uses: julia-actions/setup-julia@v2
        with:
          version: '1'
      - run: |
```
Makefile (+1, -1)

```diff
@@ -19,7 +19,7 @@ coverage:
 	$(JL) -e 'using Pkg; Pkg.test("GenericTensorNetworks"; coverage=true)'
 
 serve:
-	$(JL) -e 'using Pkg; Pkg.activate("docs"); using LiveServer; servedocs(;skip_dirs=["docs/src/assets", "docs/src/generated"], literate_dir="examples")'
+	$(JL) -e 'using Pkg; Pkg.activate("docs"); using LiveServer; servedocs(;skip_dirs=["docs/build", "docs/src/assets", "docs/src/generated"], literate_dir="examples")'
 
 clean:
 	rm -rf docs/build
```
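For reference, the new `serve` recipe just shells out to Julia. Below is a minimal sketch of the equivalent REPL session; the `skip_dirs` and `literate_dir` values are taken straight from the recipe above, while the surrounding setup is an assumption about how one would run it by hand:

```julia
# Equivalent of `make serve`, run from the repository root.
using Pkg
Pkg.activate("docs")   # use the docs/ environment, as the Makefile does
using LiveServer

# Skipping docs/build keeps LiveServer from watching its own output and
# re-triggering builds; the other entries mirror the Makefile recipe.
servedocs(;
    skip_dirs = ["docs/build", "docs/src/assets", "docs/src/generated"],
    literate_dir = "examples",
)
```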

Project.toml (+1, -1)

```diff
@@ -41,7 +41,7 @@ LuxorGraphPlot = "0.5"
 OMEinsum = "0.8"
 Polynomials = "4"
 Primes = "0.5"
-ProblemReductions = "0.2"
+ProblemReductions = "0.3"
 Random = "1"
 SIMDTypes = "0.1"
 Serialization = "1"
```
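With the compat bound tightened, downstream environments need ProblemReductions resolved to a 0.3 release before this version of GenericTensorNetworks will install. A minimal sketch of refreshing an existing environment, using standard Pkg calls (not part of the commit):

```julia
# Update the dependency and confirm the resolved version satisfies the new
# compat entry ProblemReductions = "0.3".
using Pkg
Pkg.update("ProblemReductions")   # re-resolve to the 0.3.x series
Pkg.status("ProblemReductions")   # print the installed version as a quick check
```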

docs/src/index.md (+79, -43)

````diff
@@ -4,57 +4,93 @@ CurrentModule = GenericTensorNetworks
 
 # GenericTensorNetworks
 
-This package implements generic tensor networks to compute *solution space properties* of a class of hard combinatorial problems.
-The *solution space properties* include
-* The maximum/minimum solution sizes,
-* The number of solutions at certain sizes,
-* The enumeration of solutions at certain sizes.
-* The direct sampling of solutions at certain sizes.
-
-The solvable problems include [Independent set problem](@ref), [Maximal independent set problem](@ref), [Spin-glass problem](@ref), [Cutting problem](@ref), [Vertex matching problem](@ref), [Binary paint shop problem](@ref), [Coloring problem](@ref), [Dominating set problem](@ref), [Satisfiability problem](@ref), [Set packing problem](@ref) and [Set covering problem](@ref).
-
-## Background knowledge
-
-Please check our paper ["Computing properties of independent sets by generic programming tensor networks"](https://arxiv.org/abs/2205.03718).
-If you find our paper or software useful in your work, we would be grateful if you could cite our work. The [CITATION.bib](https://github.com/QuEraComputing/GenericTensorNetworks.jl/blob/master/CITATION.bib) file in the root of this repository lists the relevant papers.
-
-## Quick start
-
-You can find a set up guide in our [README](https://github.com/QuEraComputing/GenericTensorNetworks.jl).
-To get started, open a Julia REPL and type the following code.
-
-```@repl
-using GenericTensorNetworks, Graphs#, CUDA
-solve(
-    GenericTensorNetwork(IndependentSet(
-        Graphs.random_regular_graph(20, 3),
-        UnitWeight(20)); # default: uniform weight 1
-        optimizer = TreeSA(),
-        openvertices = (), # default: no open vertices
-        fixedvertices = Dict() # default: no fixed vertices
-    ),
-    GraphPolynomial();
-    usecuda=false # the default value
-)
+## Overview
+GenericTensorNetworks is a high-performance package that uses tensor network algorithms to solve challenging combinatorial optimization problems. This approach allows us to efficiently compute various solution space properties that would be intractable with traditional methods.
+
+## Key Capabilities
+Our package can compute a wide range of solution space properties:
+
+* Maximum and minimum solution sizes
+* Solution counts at specific sizes
+* Complete enumeration of solutions
+* Statistical sampling from the solution space
+
+## Supported Problem Classes
+GenericTensorNetworks can solve many important combinatorial problems:
+
+* [Independent Set Problem](@ref)
+* [Maximal Independent Set Problem](@ref)
+* [Spin-Glass Problem](@ref)
+* [Maximum Cut Problem](@ref)
+* [Vertex Matching Problem](@ref)
+* [Binary Paint Shop Problem](@ref)
+* [Graph Coloring Problem](@ref)
+* [Dominating Set Problem](@ref)
+* [Boolean Satisfiability Problem](@ref)
+* [Set Packing Problem](@ref)
+* [Set Covering Problem](@ref)
+
+## Scientific Background
+For the theoretical foundation and algorithmic details, please refer to our paper:
+["Computing properties of independent sets by generic programming tensor networks"](https://arxiv.org/abs/2205.03718)
+
+If you find our package useful in your research, please cite our work using the references in [CITATION.bib](https://github.com/QuEraComputing/GenericTensorNetworks.jl/blob/master/CITATION.bib).
+
+## Getting Started
+
+### Installation
+Installation instructions are available in our [README](https://github.com/QuEraComputing/GenericTensorNetworks.jl).
+
+### Basic Example
+Here's a simple example that computes the independence polynomial of a random regular graph:
+
+```julia
+using GenericTensorNetworks, Graphs # Add CUDA for GPU acceleration
+
+# Create and solve a problem instance
+result = solve(
+    GenericTensorNetwork(
+        IndependentSet(
+            Graphs.random_regular_graph(20, 3), # Graph to analyze
+            UnitWeight(20)                      # Uniform vertex weights
+        );
+        optimizer = TreeSA(),       # Contraction order optimizer
+        openvertices = (),          # No open vertices
+        fixedvertices = Dict()      # No fixed vertices
+    ),
+    GraphPolynomial();              # Property to compute
+    usecuda = false                 # Use CPU (set true for GPU)
+)
 ```
 
-Here the main function [`solve`](@ref) takes three input arguments, the problem instance of type [`IndependentSet`](@ref), the property instance of type [`GraphPolynomial`](@ref) and an optional key word argument `usecuda` to decide use GPU or not.
-If one wants to use GPU to accelerate the computation, the `, CUDA` should be uncommented.
+### Understanding the API
+
+The main function `solve` takes three components:
 
-An [`IndependentSet`](@ref) instance takes two positional arguments to initialize, the graph instance that one wants to solve and the weights for each vertex. Here, we use a random regular graph with 20 vertices and degree 3, and the default uniform weight 1.
+1. **Problem Instance**: Created with `GenericTensorNetwork`, which wraps problem types like `IndependentSet`
+   - The first argument defines the problem (graph and weights)
+   - Optional arguments control the tensor network construction:
+     - `optimizer`: Algorithm for finding efficient contraction orders
+     - `openvertices`: Degrees of freedom to leave uncontracted
+     - `fixedvertices`: Variables with fixed assignments
 
-The [`GenericTensorNetwork`](@ref) function is a constructor for the problem instance, which takes the problem instance as the first argument and optional key word arguments. The key word argument `optimizer` is for specifying the tensor network optimization algorithm.
-The keyword argument `openvertices` is a tuple of labels for specifying the degrees of freedom not summed over, and `fixedvertices` is a label-value dictionary for specifying the fixed values of the degree of freedoms.
-Here, we use [`TreeSA`](@ref) method as the tensor network optimizer, and leave `openvertices` the default values.
-The [`TreeSA`](@ref) method finds the best contraction order in most of our applications, while the default [`GreedyMethod`](@ref) runs the fastest.
+2. **Property to Compute**: Such as `GraphPolynomial`, `SizeMax`, or `ConfigsAll`
+
+3. **Computation Options**: Like `usecuda` to enable GPU acceleration
+
+Note: The first execution will be slower due to Julia's just-in-time compilation. Subsequent runs will be much faster.
+
+### API Structure
+The following diagram illustrates the possible combinations of inputs:
 
-The first execution of this function will be a bit slow due to Julia's just in time compiling.
-The subsequent runs will be fast.
-The following diagram lists possible combinations of input arguments, where functions in the `Graph` are mainly defined in the package [Graphs](https://github.com/JuliaGraphs/Graphs.jl), and the rest can be found in this package.
 ```@raw html
 <div align=center>
 <img src="assets/fig7.svg" width="75%"/>
 </div>
 ```
-You can find many examples in this documentation, a good one to start with is [Independent set problem](@ref).
 
+Functions in the `Graph` box are primarily from the [Graphs](https://github.com/JuliaGraphs/Graphs.jl) package, while the rest are defined in GenericTensorNetworks.
+
+## Next Steps
+For a deeper understanding, we recommend starting with the [Independent Set Problem](@ref) example, which demonstrates the core functionality of the package.
+```
````
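The "Property to Compute" slot described in the new docs accepts other property types besides `GraphPolynomial`. A short sketch of what that looks like; the property names come from the package, while the choice of graph is just for illustration:

```julia
using GenericTensorNetworks, Graphs

# Build one tensor network and query several solution-space properties.
graph = Graphs.smallgraph(:petersen)
net = GenericTensorNetwork(IndependentSet(graph))

solve(net, SizeMax())       # largest independent-set size
solve(net, CountingMax())   # number of maximum independent sets
solve(net, ConfigsMax())    # the maximum configurations themselves
```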

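Similarly, the `openvertices` and `fixedvertices` keywords described under "Understanding the API" can be sketched as follows; the vertex indices are arbitrary and only meant to show the calling pattern:

```julia
using GenericTensorNetworks, Graphs

graph = Graphs.smallgraph(:petersen)
problem = IndependentSet(graph)

# Fix vertex 1 to state 0, i.e. force it out of every independent set.
constrained = GenericTensorNetwork(problem; fixedvertices = Dict(1 => 0))
solve(constrained, SizeMax())

# Leave vertex 1 uncontracted to get one result per state of that vertex.
marginal = GenericTensorNetwork(problem; openvertices = (1,))
solve(marginal, SizeMax())
```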