Academic Journal

Timely Reporting of Heavy Hitters Using External Memory

Bibliographic Details
Title: Timely Reporting of Heavy Hitters Using External Memory
Authors: Singh, Shikha, Pandey, Prashant, Bender, Michael A., Berry, Jonathan W., Farach-Colton, Martín, Johnson, Rob, Kroeger, Thomas M., Phillips, Cynthia A.
Contributors: NSF, Laboratory-Directed Research-and-Development program at Sandia National Laboratories, National Technology and Engineering Solutions of Sandia, LLC., Honeywell International, Inc., U.S. Department of Energy’s National Nuclear Security Administration, U.S. Department of Energy or the United States Government, Advanced Scientific Computing Research, Office of Science of the DOE, NERSC, Exascale Computing Project, U.S. Department of Energy Office of Science and the National Nuclear Security Administration
Source: ACM Transactions on Database Systems; volume 46, issue 4, pages 1-35; ISSN 0362-5915, 1557-4644
Publisher: Association for Computing Machinery (ACM)
Publication Year: 2021
Description: Given an input stream S of size N, a ɸ-heavy hitter is an item that occurs at least ɸN times in S. The problem of finding heavy hitters is extensively studied in the database literature. We study a real-time heavy-hitters variant in which an element must be reported shortly after we see its T = ɸN-th occurrence (and hence it becomes a heavy hitter). We call this the Timely Event Detection (TED) problem. The TED problem models the needs of many real-world monitoring systems, which demand accurate (i.e., no false negatives) and timely reporting of all events from large, high-speed streams with a low reporting threshold (high sensitivity). Like the classic heavy-hitters problem, solving the TED problem without false positives requires large space (Ω(N) words). Thus, in-RAM heavy-hitters algorithms typically sacrifice accuracy (i.e., allow false positives), sensitivity, or timeliness (i.e., use multiple passes). We show how to adapt heavy-hitters algorithms to external memory to solve the TED problem on large high-speed streams while guaranteeing accuracy, sensitivity, and timeliness. Our data structures are limited only by I/O bandwidth (not latency) and support a tunable tradeoff between reporting delay and I/O overhead. With a small bounded reporting delay, our algorithms incur only a logarithmic I/O overhead. We implement and validate our data structures empirically using the Firehose streaming benchmark. Multi-threaded versions of our structures can scale to process 11M observations per second before becoming CPU bound. In comparison, a naive adaptation of the standard heavy-hitters algorithm to external memory would be limited by the storage device’s random I/O throughput, i.e., ≈100K observations per second.
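The abstract's definitions can be made concrete with a small sketch (not the paper's external-memory data structure): an exact in-RAM counter that reports an item the moment its count reaches the threshold T = ɸN. This illustrates why the no-false-positive, no-false-negative guarantee costs Ω(N) words of space — every distinct item must be tracked. The function name `ted_exact` and its parameters are illustrative choices, not from the paper.

```python
from collections import defaultdict

def ted_exact(stream, phi, n):
    """Yield each item as soon as it becomes a phi-heavy hitter,
    i.e., on its T = phi*n-th occurrence (timely, no false results).

    Exact counting keeps one counter per distinct item, so the
    space is linear in the number of distinct items seen.
    """
    threshold = phi * n          # T = phi * N
    counts = defaultdict(int)    # per-item occurrence counts
    reported = set()             # items already reported once
    for item in stream:
        counts[item] += 1
        if counts[item] >= threshold and item not in reported:
            reported.add(item)
            yield item           # reported immediately at the T-th occurrence

# Example: with N = 6 and phi = 0.5, the threshold is 3 occurrences,
# so 'a' is reported as soon as its third copy arrives.
stream = ['a', 'b', 'a', 'a', 'b', 'c']
print(list(ted_exact(stream, phi=0.5, n=len(stream))))  # → ['a']
```

The paper's contribution is achieving the same guarantees when the counter table exceeds RAM, by organizing it in external memory so the structure is bound by I/O bandwidth rather than random-I/O latency.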
Document Type: article in journal/newspaper
Language: English
DOI: 10.1145/3472392
Availability: http://dx.doi.org/10.1145/3472392
https://dl.acm.org/doi/pdf/10.1145/3472392
Rights: http://www.acm.org/publications/policies/copyright_policy#Background
Accession Number: edsbas.F2436F11
Database: BASE