We host the application deequ so that it can be run in our online workstations with Wine or directly.
Quick description of deequ:
Deequ is a library built on top of Apache Spark for defining “unit tests for data”: formal constraints or checks on datasets that ensure data quality along dimensions such as completeness, uniqueness, value ranges, and correlations. It scales to large datasets (billions of rows) by translating those checks into Spark jobs. Deequ supports advanced features such as a metrics repository for storing computed statistics over time, anomaly detection on data quality metrics, and automatic suggestion of likely constraints for new datasets. It also underpins AWS Glue Data Quality, which offers a declarative rule language called DQDL (Data Quality Definition Language) for specifying quality rules. Users typically run Deequ before feeding data downstream (to ML pipelines, analytics, or production systems), enabling early detection and isolation of data errors. There is also a Python wrapper, PyDeequ, for users who prefer working from Python environments.
Features:
- Metrics computation for large datasets: completeness, min/max, uniqueness, correlation, etc., computed via Spark aggregations (first sketch below)
- Constraint definition and verification: developers define data quality constraints and Deequ checks whether the data satisfies them (second sketch below)
- Constraint suggestion / profiling: profiles data and automatically suggests likely useful constraints (third sketch below)
- Anomaly detection / drift monitoring across data runs and versions, so unexpected changes in data patterns are caught (fourth sketch below)
- Integrates with distributed data sources and storage systems (e.g. S3, HDFS) and works as part of Spark pipelines
- Can be used via a Python abstraction (PyDeequ) for those who prefer a Python interface over Scala when working with Spark
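As an illustration of metrics computation, here is a minimal Scala sketch that runs a few Deequ analyzers over a small DataFrame; the dataset and its columns (id, productName, priority, numViews) are made up for the example.

```scala
import org.apache.spark.sql.SparkSession
import com.amazon.deequ.analyzers.{Completeness, Correlation, Maximum, Size, Uniqueness}
import com.amazon.deequ.analyzers.runners.AnalysisRunner
import com.amazon.deequ.analyzers.runners.AnalyzerContext.successMetricsAsDataFrame

val spark = SparkSession.builder().master("local[*]").appName("deequ-metrics").getOrCreate()
import spark.implicits._

// Hypothetical sample dataset
val data = Seq(
  (1L, "Thingy A", "high", 0L),
  (2L, "Thingy B", "low", 12L),
  (3L, null, "high", 5L)
).toDF("id", "productName", "priority", "numViews")

// Compute a few data quality metrics as Spark aggregations
val analysis = AnalysisRunner
  .onData(data)
  .addAnalyzer(Size())                          // number of rows
  .addAnalyzer(Completeness("productName"))     // fraction of non-null values
  .addAnalyzer(Uniqueness("id"))                // fraction of unique values
  .addAnalyzer(Maximum("numViews"))             // maximum value
  .addAnalyzer(Correlation("id", "numViews"))   // Pearson correlation
  .run()

successMetricsAsDataFrame(spark, analysis).show(truncate = false)
```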
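For constraint definition and verification, a sketch along the lines of the Deequ documentation, assuming the same spark session and data DataFrame as in the first sketch:

```scala
import com.amazon.deequ.VerificationSuite
import com.amazon.deequ.checks.{Check, CheckLevel, CheckStatus}

// Define "unit tests for data" and verify them against the DataFrame
val verificationResult = VerificationSuite()
  .onData(data)
  .addCheck(
    Check(CheckLevel.Error, "basic integrity checks")
      .hasSize(_ >= 3)                                  // at least 3 rows
      .isComplete("id")                                 // no missing ids
      .isUnique("id")                                   // no duplicate ids
      .isContainedIn("priority", Array("high", "low"))  // only allowed values
      .isNonNegative("numViews"))                       // no negative view counts
  .run()

if (verificationResult.status == CheckStatus.Success) {
  println("The data passed all checks.")
} else {
  println("Errors were found in the data.")
}
```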
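For constraint suggestion / profiling, a sketch that asks Deequ to propose constraints for the same hypothetical DataFrame:

```scala
import com.amazon.deequ.suggestions.{ConstraintSuggestionRunner, Rules}

// Profile the data and let Deequ suggest likely constraints
val suggestionResult = ConstraintSuggestionRunner()
  .onData(data)
  .addConstraintRules(Rules.DEFAULT)
  .run()

// Print each suggested constraint together with the Scala code that would enforce it
suggestionResult.constraintSuggestions.foreach { case (column, suggestions) =>
  suggestions.foreach { suggestion =>
    println(s"Suggested constraint for '$column': ${suggestion.description}")
    println(s"Scala code: ${suggestion.codeForConstraint}")
  }
}
```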
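For anomaly detection across runs, a sketch using an in-memory metrics repository; yesterdaysData and todaysData are hypothetical DataFrames representing two successive versions of the dataset:

```scala
import com.amazon.deequ.VerificationSuite
import com.amazon.deequ.analyzers.Size
import com.amazon.deequ.anomalydetection.RelativeRateOfChangeStrategy
import com.amazon.deequ.checks.CheckStatus
import com.amazon.deequ.repository.ResultKey
import com.amazon.deequ.repository.memory.InMemoryMetricsRepository

// Store metrics across runs so today's values can be compared with history
val metricsRepository = new InMemoryMetricsRepository()

// Baseline run on yesterday's data
VerificationSuite()
  .onData(yesterdaysData)
  .useRepository(metricsRepository)
  .saveOrAppendResult(ResultKey(System.currentTimeMillis() - 24 * 60 * 60 * 1000))
  .addAnomalyCheck(
    RelativeRateOfChangeStrategy(maxRateIncrease = Some(2.0)),  // flag if the size more than doubles
    Size())
  .run()

// Today's run: the Size metric is compared against the stored history
val todaysResult = VerificationSuite()
  .onData(todaysData)
  .useRepository(metricsRepository)
  .saveOrAppendResult(ResultKey(System.currentTimeMillis()))
  .addAnomalyCheck(
    RelativeRateOfChangeStrategy(maxRateIncrease = Some(2.0)),
    Size())
  .run()

if (todaysResult.status != CheckStatus.Success) {
  println("Anomaly detected: today's dataset grew unexpectedly.")
}
```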
Programming Language: Scala.