MatrixBandwidth.jl

Overview
MatrixBandwidth.jl offers fast algorithms for matrix bandwidth minimization and recognition. The bandwidth of an $n \times n$ matrix $A$ is the minimum non-negative integer $k \in \{0, 1, \ldots, n - 1\}$ such that $A_{i,j} = 0$ whenever $|i - j| > k$. Reordering the rows and columns of a matrix to reduce its bandwidth has many practical applications in engineering and scientific computing: it can improve performance when solving linear systems, approximating partial differential equations, optimizing circuit layout, and more. There are two variants of this problem: minimization, which involves finding a permutation matrix $P$ such that the bandwidth of $PAP^\mathsf{T}$ is minimized, and recognition, which entails determining whether there exists a permutation matrix $P$ such that the bandwidth of $PAP^\mathsf{T}$ is less than or equal to some fixed non-negative integer (an optimal permutation that fully minimizes the bandwidth of $A$ is not required).
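To make the definition concrete, here is a naive one-liner (for illustration only; it is not part of the package API, and the exported bandwidth function should be preferred in practice) that computes the bandwidth of a matrix directly from the definition:
julia> naive_bandwidth(A) = maximum([abs(i - j) for i in axes(A, 1), j in axes(A, 2) if !iszero(A[i, j])]; init=0);
julia> naive_bandwidth([1 0 0; 1 1 0; 0 1 1])  # All nonzero entries lie within one diagonal of the main diagonal
1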
Many matrix bandwidth reduction algorithms exist in the literature, but implementations in the open-source ecosystem are scarce, and those that do exist mostly cover older, less efficient algorithms. The Boost libraries in C++, the NetworkX library in Python, and the MATLAB standard library each implement only the reverse Cuthill–McKee algorithm from 1971. This gap in the ecosystem not only makes it difficult for theoretical researchers to benchmark and compare newly proposed algorithms but also precludes the application of the most performant modern algorithms in real-life industry settings. MatrixBandwidth.jl aims to bridge this gap by presenting a unified interface for matrix bandwidth reduction algorithms in Julia, designed with extensibility to further methods in mind.
Algorithms
The following algorithms are currently supported:
- Minimization
  - Exact
    - Del Corso–Manzini
    - Del Corso–Manzini with perimeter search
    - Caprara–Salazar-González
    - Saxe–Gurari–Sudborough
    - Brute-force search
  - Heuristic
    - Gibbs–Poole–Stockmeyer
    - Cuthill–McKee
    - Reverse Cuthill–McKee
- Recognition
  - Del Corso–Manzini
  - Del Corso–Manzini with perimeter search
  - Caprara–Salazar-González
  - Saxe–Gurari–Sudborough
  - Brute-force search
Recognition algorithms determine whether any row-and-column permutation of a matrix induces bandwidth less than or equal to some fixed integer. Exact minimization algorithms are guaranteed to find orderings that truly minimize the bandwidth, while heuristic minimization algorithms produce near-optimal solutions more quickly. Metaheuristic minimization algorithms employ iterative search frameworks to find better solutions than heuristic methods (albeit more slowly); no metaheuristic algorithms are implemented yet, but several are currently under development:
- Greedy randomized adaptive search procedure (GRASP)
- Particle swarm optimization with hill climbing (PSO-HC)
- Simulated annealing
- Genetic algorithm
- Ant colony optimization
- Tabu search
An index of all available algorithms by submodule (not including the unfinished metaheuristic algorithms) may also be accessed via the MatrixBandwidth.ALGORITHMS constant; simply run the following command in the Julia REPL:
julia> MatrixBandwidth.ALGORITHMS
Dict{Symbol, Union{Dict{Symbol}, Vector}} with 2 entries:
[...]
To extend the interface with a new matrix bandwidth minimization algorithm, define a new concrete subtype of AbstractSolver (or of one of its abstract subtypes like MetaheuristicSolver), then implement a corresponding Minimization._minimize_bandwidth_impl(::AbstractMatrix{Bool}, ::NewSolverType) method. Similarly, to implement a new bandwidth recognition algorithm, define a new concrete subtype of AbstractDecider, then implement a corresponding Recognition._has_bandwidth_k_ordering_impl(::AbstractMatrix{Bool}, ::Integer, ::NewDeciderType) method. Do not attempt to directly implement new minimize_bandwidth or has_bandwidth_k_ordering methods, as these functions contain common preprocessing logic independent of the specific algorithm used.
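As a rough illustration, a new minimization algorithm might be wired in with a skeleton like the one below. The type name and its trivial "identity ordering" logic are hypothetical, HeuristicSolver is used here merely as an example supertype, and the assumption that _minimize_bandwidth_impl returns a row-and-column ordering is exactly that, an assumption; consult the relevant docstrings for the actual contract and for any additional methods a solver may need to define.
using MatrixBandwidth

# Hypothetical solver that simply keeps the original row/column order.
# `HeuristicSolver` is one of the abstract subtypes of `AbstractSolver`.
struct IdentitySolver <: Minimization.Heuristic.HeuristicSolver end

# ASSUMPTION: the internal method is treated here as returning an ordering of 1:n.
# Check the `_minimize_bandwidth_impl` docstring for the real return contract.
function Minimization._minimize_bandwidth_impl(A::AbstractMatrix{Bool}, ::IdentitySolver)
    return collect(1:size(A, 1))
end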
Installation
The only prerequisite is a working Julia installation (v1.10 or later). First, enter Pkg mode by typing ] in the Julia REPL, then run the following command:
pkg> add MatrixBandwidth
Basic use
MatrixBandwidth.jl offers unified interfaces for both bandwidth minimization and bandwidth recognition via the minimize_bandwidth and has_bandwidth_k_ordering functions, respectively; the algorithm itself is specified as an argument. For example, to minimize the bandwidth of a random matrix with the reverse Cuthill–McKee algorithm, you can run the following code:
julia> using Random, SparseArrays
julia> Random.seed!(8675309);
julia> A = sprand(60, 60, 0.01); A = A + A' # Ensure structural symmetry
60×60 SparseMatrixCSC{Float64, Int64} with 93 stored entries:
⎡⠀⠀⠀⠀⠠⠀⠀⠀⠀⠀⠀⠀⠠⠀⢀⠀⠀⠒⠀⠀⠀⠀⠀⠀⡀⠨⠀⠀⠀⠀⎤
⎢⠀⠀⠀⠀⠅⠀⠀⠀⠀⠀⠐⠀⠠⡀⢀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠂⠠⠀⠀⠀⠀⎥
⎢⠀⠂⠁⠁⢀⠐⠀⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠠⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠁⠀⡀⠀⠀⠀⠀⠀⠀⠈⠁⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠐⠀⠀⠀⠀⠀⠀⠀⠀⠐⠀⠀⠈⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠐⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠁⠀⠄⡀⠀⠀⎥
⎢⠀⠂⠀⠢⠀⠀⠀⠀⠀⠀⠀⠄⠀⠀⠀⠀⢀⠀⠁⠀⠀⠀⠐⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠐⠀⠐⠀⠀⠇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠐⠁⠀⠀⎥
⎢⢠⠀⠀⠀⠀⢀⠀⠠⢀⠀⠀⠀⠀⠐⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠂⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠐⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠊⠀⠀⢠⠀⠀⠀⠀⠀⠠⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠐⠀⠀⠀⠀⠀⠀⠀⠀⣀⠀⠀⠁⠀⠀⠀⠄⠀⎥
⎢⡀⡈⠈⡀⠀⠀⠆⠀⠀⠀⠁⠀⠀⠀⠀⠀⠈⠀⠀⠀⠀⠀⠁⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠡⠀⠀⠔⠀⠀⠀⠐⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡀⎥
⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡀⠀⠁⠀⠀⠀⠠⠄⠁⎦
julia> res_minimize = minimize_bandwidth(A, Minimization.ReverseCuthillMcKee())
Results of Bandwidth Minimization Algorithm
* Algorithm: Reverse Cuthill–McKee
* Approach: heuristic
* Minimum Bandwidth: 9
* Original Bandwidth: 51
* Matrix Size: 60×60
julia> A[res_minimize.ordering, res_minimize.ordering]
60×60 SparseMatrixCSC{Float64, Int64} with 93 stored entries:
⎡⠠⡢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
⎢⠀⠈⠠⡢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠈⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠠⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠠⡢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠠⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠢⡀⠈⠀⠂⠀⢠⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠠⠀⠀⠀⠀⠘⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⠀⠀⠀⠣⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠢⣀⡸⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠪⡢⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢊⠐⢠⡀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠲⢄⡱⢀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠐⠪⢂⡄⠀⎥
⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠪⡢⎦
Similarly, to determine whether said matrix has bandwidth at most, say, 3 (not necessarily caring about the true minimum) via the Saxe–Gurari–Sudborough algorithm, you can run:
julia> res_recognize = has_bandwidth_k_ordering(A, 3, Recognition.SaxeGurariSudborough())
Results of Bandwidth Recognition Algorithm
* Algorithm: Saxe–Gurari–Sudborough
* Bandwidth Threshold k: 3
* Has Bandwidth ≤ k Ordering: true
* Original Bandwidth: 51
* Matrix Size: 60×60
julia> A[res_recognize.ordering, res_recognize.ordering]
60×60 SparseMatrixCSC{Float64, Int64} with 93 stored entries:
⎡⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠠⠂⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠘⠀⡠⢀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠐⠊⠀⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠀⠀⠠⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠢⡄⠉⡢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠪⢀⠔⢂⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠰⡊⠈⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠀⠊⠀⠠⡀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠢⣀⡸⢀⡀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠰⡀⠈⢠⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠒⣀⡸⢂⡀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠰⡀⠈⠠⡀⎥
⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠢⠊⡠⎦
(In this case, though, it turns out that 3 is the true minimum bandwidth of the matrix, as can be verified by running minimize_bandwidth with any exact algorithm.)
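For example, assuming the exact solvers' default no-argument constructors, such a check might look like the following (output omitted here):
julia> minimize_bandwidth(A, Minimization.Exact.SaxeGurariSudborough())  # Should report a minimum bandwidth of 3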
If no algorithm is explicitly specified, minimize_bandwidth defaults to the Gibbs–Poole–Stockmeyer algorithm:
julia> res_minimize_default = minimize_bandwidth(A)
Results of Bandwidth Minimization Algorithm
* Algorithm: Gibbs–Poole–Stockmeyer
* Approach: heuristic
* Minimum Bandwidth: 5
* Original Bandwidth: 51
* Matrix Size: 60×60
julia> A[res_minimize_default.ordering, res_minimize_default.ordering]
60×60 SparseMatrixCSC{Float64, Int64} with 93 stored entries:
⎡⠠⡢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
⎢⠀⠈⠠⡢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠈⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠠⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠠⡢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⠄⠁⠀⢢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠠⣀⣀⠘⠄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠡⠄⡡⠢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠢⣀⠘⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⠀⡠⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠪⢂⡄⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⢊⡰⢀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠐⢤⠓⣠⠀⎥
⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠚⠠⡢⎦
(We default to Gibbs–Poole–Stockmeyer because it is one of the most accurate heuristic algorithms—note how in this case, it produced a lower-bandwidth ordering than reverse Cuthill–McKee. Of course, if true optimality is required, an exact algorithm should be used instead.)
has_bandwidth_k_ordering similarly defaults to the Del Corso–Manzini algorithm:
julia> res_recognize_default = has_bandwidth_k_ordering(A, 6)
Results of Bandwidth Recognition Algorithm
* Algorithm: Del Corso–Manzini
* Bandwidth Threshold k: 6
* Has Bandwidth ≤ k Ordering: true
* Original Bandwidth: 51
* Matrix Size: 60×60
julia> A[res_recognize_default.ordering, res_recognize_default.ordering]
60×60 SparseMatrixCSC{Float64, Int64} with 93 stored entries:
⎡⠀⠀⠄⠑⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
⎢⢄⠁⠀⠀⠀⠑⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⢄⠀⠀⠀⠀⠠⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠑⠀⡀⠀⠀⠌⠑⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠈⢆⠁⠀⠀⠀⠲⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠘⢠⡀⠐⠀⠀⡁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠁⠄⠠⠀⠀⠐⢂⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠰⢀⡀⠈⠀⡐⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⢀⠠⠀⠀⠀⣄⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⠀⢤⠀⠀⠀⠔⠄⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⢀⠄⠀⠀⠠⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠁⠀⠂⠀⠀⠀⠁⢀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠄⠀⠠⠂⢀⠑⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠐⢄⠐⠀⠀⠀⠀⎥
⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎦
Complementing our various bandwidth minimization and recognition algorithms, MatrixBandwidth.jl exports several additional core functions, including (but not limited to) bandwidth and profile, which compute the original bandwidth and profile of a matrix:
julia> using Random, SparseArrays
julia> Random.seed!(1234);
julia> A = sprand(50, 50, 0.02)
50×50 SparseMatrixCSC{Float64, Int64} with 49 stored entries:
⎡⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠀⠀⠀⠀⎤
⎢⠀⠀⠀⠁⠀⠀⠀⠀⠀⠀⠀⠂⢀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠁⠀⠀⠀⡀⠀⡀⠀⠀⠀⠄⠀⠀⠀⠄⠀⎥
⎢⠀⠐⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠄⠂⠀⠀⠐⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠈⠀⠀⠀⠀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⢀⠀⠀⠀⠂⠀⠀⠀⠁⎥
⎢⡀⡀⢀⠄⠀⠁⠄⢀⠀⠀⠀⠀⠀⢀⠀⠀⠠⠀⠀⠀⠀⣀⠀⠀⠀⎥
⎢⠀⢀⠀⠀⠀⠊⠀⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠄⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠐⠀⠀⠀⠀⠀⠀⠀⠀⠀⠐⠀⠀⠀⠀⠀⠀⎥
⎢⠈⠀⢀⠀⠀⠀⢀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠂⎥
⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎥
⎢⠀⠀⠀⠀⠂⠀⠀⠀⠀⠀⠀⠀⠀⠨⠐⠀⠀⢀⠀⠀⠀⠀⠀⠀⠀⎥
⎣⠀⠀⠀⠀⠀⠀⠀⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎦
julia> bandwidth(A) # Bandwidth prior to any reordering of rows and columns
38
julia> profile(A) # Profile prior to any reordering of rows and columns
703
(Closely related to bandwidth, the column profile of a matrix is the sum of the distances from each diagonal entry to the farthest nonzero entry in that column, whereas the row profile is the sum of the distances from each diagonal entry to the farthest nonzero entry in that row. profile(A) computes the column profile of A by default, but it can also be used to compute the row profile.)
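As a naive illustration of the column-profile definition stated above (again, for illustration only; the exported profile function should be preferred, and its conventions may differ in edge cases):
julia> naive_column_profile(A) = sum(maximum([abs(i - j) for i in axes(A, 1) if !iszero(A[i, j])]; init=0) for j in axes(A, 2));
julia> naive_column_profile([1 1 0; 0 1 0; 1 0 1])  # Columns contribute 2, 1, and 0, respectively
3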
Documentation
The full documentation is available at GitHub Pages. Documentation for methods and types is also available via the Julia REPL; for instance, to learn more about the minimize_bandwidth function, enter help mode by typing ?, then run the following command:
help?> minimize_bandwidth
search: minimize_bandwidth bandwidth MatrixBandwidth
minimize_bandwidth(A, solver=GibbsPooleStockmeyer()) -> MinimizationResult
Minimize the bandwidth of A using the algorithm defined by solver.
The bandwidth of an n×n matrix A is the minimum non-negative integer k ∈
\{0, 1, …, n - 1\} such that A[i, j] = 0 whenever |i - j| > k.
This function computes a (near-)optimal ordering π of the rows and columns
of A so that the bandwidth of PAPᵀ is minimized, where P is the permutation
matrix corresponding to π. This is known to be an NP-complete problem;
however, several heuristic algorithms such as Gibbs–Poole–Stockmeyer run in
polynomial time while still producing near-optimal orderings in
practice. Exact methods like Caprara–Salazar-González are also available,
but they are at least exponential in time complexity and thus only feasible
for relatively small matrices.
Arguments
≡≡≡≡≡≡≡≡≡
[...]
Citing
I encourage you to cite this work if you have found any of the algorithms herein useful for your research. Starring the MatrixBandwidth.jl repository on GitHub is also appreciated.
The latest citation information may be found in the CITATION.bib file within the repository.
Project status
The latest stable release of MatrixBandwidth.jl is v0.2.1. Although several metaheuristic algorithms are still under development, the rest of the package is fully functional and covered by unit tests.
Currently, MatrixBandwidth.jl's core functions generically accept any input of type AbstractMatrix{<:Number}, without behaving any differently when given sparsely stored matrices (e.g., from the SparseArrays.jl standard library package). Capabilities for directly handling graph inputs (with the aim of reducing the bandwidth of a graph's adjacency matrix) are also not yet available. Given that bandwidth reduction is often applied to sparse matrices and graphs, these limitations will be addressed in a future release of the package.
Index
MatrixBandwidth
MatrixBandwidth.MatrixBandwidth
MatrixBandwidth.Minimization
MatrixBandwidth.Recognition
MatrixBandwidth.ALGORITHMS
MatrixBandwidth.AbstractAlgorithm
MatrixBandwidth.AbstractResult
MatrixBandwidth.NotImplementedError
MatrixBandwidth.RectangularMatrixError
MatrixBandwidth.StructuralAsymmetryError
MatrixBandwidth.bandwidth
MatrixBandwidth.bandwidth_lower_bound
MatrixBandwidth.connected_components
MatrixBandwidth.find_direct_subtype
MatrixBandwidth.floyd_warshall_shortest_paths
MatrixBandwidth.is_structurally_symmetric
MatrixBandwidth.offdiag_nz_support
MatrixBandwidth.profile
MatrixBandwidth.random_banded_matrix
MatrixBandwidth.Minimization
MatrixBandwidth.Minimization.Exact
MatrixBandwidth.Minimization.Heuristic
MatrixBandwidth.Minimization.Metaheuristic
MatrixBandwidth.Minimization.AbstractSolver
MatrixBandwidth.Minimization.MinimizationResult
MatrixBandwidth.Minimization.minimize_bandwidth
MatrixBandwidth.Minimization.Exact
MatrixBandwidth.Minimization.Exact.BruteForceSearch
MatrixBandwidth.Minimization.Exact.CapraraSalazarGonzalez
MatrixBandwidth.Minimization.Exact.DelCorsoManzini
MatrixBandwidth.Minimization.Exact.DelCorsoManziniWithPS
MatrixBandwidth.Minimization.Exact.ExactSolver
MatrixBandwidth.Minimization.Exact.SaxeGurariSudborough
MatrixBandwidth.Minimization.Heuristic
MatrixBandwidth.Minimization.Heuristic.CuthillMcKee
MatrixBandwidth.Minimization.Heuristic.GibbsPooleStockmeyer
MatrixBandwidth.Minimization.Heuristic.HeuristicSolver
MatrixBandwidth.Minimization.Heuristic.ReverseCuthillMcKee
MatrixBandwidth.Minimization.Heuristic.bi_criteria_node_finder
MatrixBandwidth.Minimization.Heuristic.pseudo_peripheral_node_finder
MatrixBandwidth.Minimization.Metaheuristic
MatrixBandwidth.Minimization.Metaheuristic.AntColony
MatrixBandwidth.Minimization.Metaheuristic.GRASP
MatrixBandwidth.Minimization.Metaheuristic.GeneticAlgorithm
MatrixBandwidth.Minimization.Metaheuristic.MetaheuristicSolver
MatrixBandwidth.Minimization.Metaheuristic.PSOHC
MatrixBandwidth.Minimization.Metaheuristic.SimulatedAnnealing
MatrixBandwidth.Minimization.Metaheuristic.TabuSearch
MatrixBandwidth.Recognition
MatrixBandwidth.Recognition.AbstractDecider
MatrixBandwidth.Recognition.BruteForceSearch
MatrixBandwidth.Recognition.CapraraSalazarGonzalez
MatrixBandwidth.Recognition.DelCorsoManzini
MatrixBandwidth.Recognition.DelCorsoManziniWithPS
MatrixBandwidth.Recognition.RecognitionResult
MatrixBandwidth.Recognition.SaxeGurariSudborough
MatrixBandwidth.Recognition.dcm_ps_optimal_depth
MatrixBandwidth.Recognition.has_bandwidth_k_ordering