🤖 AI Summary
This paper investigates the computational complexity of detecting minimum hypercycles and answering subgraph queries in hypergraphs and databases, unifying worst-case and average-case hardness analyses. Methodologically, it introduces the first worst-case-to-average-case reduction framework for hypergraph substructure detection; establishes tight (matching) upper and lower bounds for minimum hypercycle detection; designs faster algorithms for detecting long hypercycles; and, as a key novelty, proves average-case lower bounds for hypercycle counting on random hypergraphs and for an equivalent database query problem, thereby establishing their average-case hardness. The study integrates combinatorial complexity analysis, hypergraph algorithm design, and probabilistic modeling. Its contributions provide foundational complexity characterizations for hypergraph theory and database query optimization, bridging structural hypergraph properties with practical query evaluation complexity.
📝 Abstract
In this paper we present tight lower bounds and new upper bounds for hypergraph and database problems. We give tight lower bounds for finding minimum hypercycles, as well as tight lower bounds for a substantial regime of unweighted hypercycle detection. We also give a new, faster algorithm for detecting longer unweighted hypercycles. We give a worst-case to average-case reduction from detecting a subgraph of a hypergraph in the worst case to counting subgraphs of hypergraphs in the average case. We demonstrate two applications of this reduction, which yield average-case lower bounds for counting hypercycles in random hypergraphs and for queries in average-case databases. Our tight upper and lower bounds for hypercycle detection in the worst case have immediate implications for the average case via our worst-case to average-case reductions.