108M Rows of Logs & IoT Data | ClickHouse
A fast, simple way to explore machine-generated log data at scale.
This workbook uses the Brown University MgBench dataset, a public benchmark built to test analytics performance on high-volume operational logs. You'll find millions of rows of raw machine metrics (CPU, memory, disk, network, and load statistics) captured across multiple machines and machine groups. It's ideal for showing how Astrato handles live-query workloads, fast filtering, and smooth interaction on dense time-series data.

What's inside:
- Machine groups and names
- CPU usage (idle, system, user, wio, nice)
- Memory and disk metrics
- Load averages (1/5/15 minute)
- Network bytes in/out
- Time-series views built directly from the raw MergeTree tables

This workbook helps users explore:
- Trends over time
- Resource spikes
- Group comparisons
- Anomalies in machine performance

Great for demos that highlight real operational telemetry, live queries, and how Astrato handles large datasets without extracts.
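If you want to poke at the same data outside the workbook, here is a minimal sketch of the kind of live aggregation that runs directly against the raw MergeTree table: hourly CPU-busy percentage, load average, and network totals per machine group. It assumes the MgBench machine-log table is loaded as mgbench.logs1 with the column names from the ClickHouse example-dataset docs linked below; the host, credentials, and the use of the clickhouse-connect Python client are illustrative placeholders, not part of the workbook itself.

```python
# Sketch: hourly CPU-busy %, 1-minute load average, and network totals per
# machine group, aggregated in ClickHouse from the raw MergeTree table.
# Assumptions: the MgBench data is loaded as mgbench.logs1 (schema per the
# ClickHouse Brown-benchmark docs); host and credentials are placeholders.
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="your-clickhouse-host",  # placeholder: point at your own instance
    port=8443,
    username="default",           # placeholder credentials
    password="",
    secure=True,
)

query = """
SELECT
    toStartOfHour(log_time) AS hour,
    machine_group,
    avg(100 - cpu_idle)     AS avg_cpu_busy_pct,  -- share of CPU not idle
    avg(load_one)           AS avg_load_1m,
    sum(bytes_in)           AS total_bytes_in,
    sum(bytes_out)          AS total_bytes_out
FROM mgbench.logs1
GROUP BY hour, machine_group
ORDER BY hour, machine_group
"""

result = client.query(query)
print(result.column_names)
for row in result.result_rows[:10]:  # preview the first few aggregated rows
    print(row)
```

The aggregation happens entirely inside ClickHouse and only the grouped result comes back to the client, which is the same pattern the workbook relies on: live queries against the full table rather than extracts.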
Sources:
- https://clickhouse.com/docs/getting-started/example-datasets/brown-benchmark
- https://clickpy-playground.clickhouse.com/?query_id=GEOICKHUJBUXMYROUXJK7C
