dbldatagen.datasets.benchmark_groupby module
- class BenchmarkGroupByProvider
Bases: NoAssociatedDatasetsMixin, DatasetProvider
Grouping Benchmark Dataset
This is a benchmarking dataset for evaluating groupBy operations on columns of different types and cardinalities.
- It takes the following options when retrieving the table (see the usage sketch below):
random: if True, generates random data
rows: number of rows to generate
partitions: number of partitions to use
groups: number of groups within the dataset
percentNulls: percentage of null values in the non-base columns
As the data specification is a DataGenerator object, you can add further columns to the dataset and add constraints (when the feature is available).
Note that this dataset does not use any features that would prevent it from being used as a source for a streaming DataFrame, so the flag supportsStreaming is set to True.
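As a minimal usage sketch, the dataset can be retrieved through the dbldatagen Datasets API and the resulting DataGenerator built into a batch or streaming DataFrame. The registered dataset name "benchmark/groupby" and the option values are assumptions for illustration; check your dbldatagen version for the exact registered name.

```python
import dbldatagen as dg
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Retrieve the data generation spec for this benchmark dataset.
# "benchmark/groupby" is an assumed registered name; verify it against
# the provider registrations in your dbldatagen version.
ds = dg.Datasets(spark, "benchmark/groupby").get(
    rows=10_000_000,
    partitions=16,
    groups=1_000,        # number of groups within the dataset
    percentNulls=0.05,   # 5% nulls in the non-base columns
    random=True,
)

# Build a batch DataFrame ...
df = ds.build()

# ... or, since supportsStreaming is True, a streaming DataFrame.
df_stream = ds.build(withStreaming=True)
```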
- ALLOWED_OPTIONS = ['groups', 'percentNulls', 'rows', 'partitions', 'tableName', 'random']
- COLUMN_COUNT = 12
- DEFAULT_NUM_GROUPS = 100
- DEFAULT_PCT_NULLS = 0.0
- MAX_LONG = 9223372036854775807
- getTableGenerator(sparkSession, *, tableName=None, rows=-1, partitions=-1, **options)
Gets a data generation instance that will produce the data for the named table.
- Parameters:
sparkSession – Spark session to use
tableName – Name of table to provide
rows – Number of rows requested
partitions – Number of partitions requested
autoSizePartitions – Whether to automatically size the partitions from the number of rows
options – Options passed to generate the table
- Returns:
A DataGenerator instance to generate the table if successful; raises an error otherwise
Implementors of the individual data providers are responsible for sizing partitions for the datasets based on the number of rows and columns; a suitable partition count can be computed from these using the autoComputePartitions method.
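A minimal sketch of calling the method directly, assuming the provider can be instantiated with no arguments; the table name "primary" and the option values are assumptions for illustration:

```python
from pyspark.sql import SparkSession
from dbldatagen.datasets.benchmark_groupby import BenchmarkGroupByProvider

spark = SparkSession.builder.getOrCreate()

provider = BenchmarkGroupByProvider()

# Only options named in ALLOWED_OPTIONS are accepted; "primary" is an
# assumed default table name for this provider.
gen = provider.getTableGenerator(
    spark,
    tableName="primary",
    rows=1_000_000,
    partitions=8,
    groups=500,
    percentNulls=0.1,
)

# The returned DataGenerator builds the benchmark DataFrame.
df = gen.build()
df.printSchema()
```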