dbldatagen.datasets.basic_user module

class BasicUserProvider[source]

Bases: NoAssociatedDatasetsMixin, DatasetProvider

Basic User Data Set

This is a basic user data set with customer id, name, email, IP address, and phone number.

It takes the following options when retrieving the table:
  • random: if True, generates random data

  • dummyValues: number of additional dummy value columns to generate (to widen row size if necessary)

  • rows: number of rows to generate. Default is 100000

  • partitions: number of partitions to use. If -1, it will be computed based on the number of rows

As the data specification is a DataGenerator object, you can add further columns to the data set and add constraints (when the feature is available).

Note that this dataset does not use any features that would prevent it from being used as a source for a streaming dataframe, and so the flag supportsStreaming is set to True.
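As a minimal usage sketch, the dataset can be retrieved through the Datasets interface. This assumes the provider is registered under the dataset name "basic/user" and that a SparkSession named spark is already available; the option names mirror the list above.

    import dbldatagen as dg

    # Minimal sketch: assumes this provider is registered as "basic/user"
    # and that a SparkSession `spark` already exists.
    dataspec = (
        dg.Datasets(spark, "basic/user")
          .get(rows=100000, partitions=4, random=True, dummyValues=2)
    )

    # The result is a DataGenerator, so further columns can still be added
    # before the data is materialized.
    df = dataspec.build()
    df.show(5)

Because the provider reports supportsStreaming as True, the same specification should also be usable with build(withStreaming=True) to obtain a streaming dataframe.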

COLUMN_COUNT = 5
MAX_LONG = 9223372036854775807
getTableGenerator(sparkSession, *, tableName=None, rows=-1, partitions=-1, **options)[source]

Gets a data generation instance that will produce the table for the named table.

Parameters:
  • sparkSession – Spark session to use

  • tableName – Name of table to provide

  • rows – Number of rows requested

  • partitions – Number of partitions requested

  • autoSizePartitions – Whether to automatically size the partitions from the number of rows

  • options – Options passed to generate the table

Returns:

DataGenerator instance to generate the table if successful; throws an error otherwise

Implementers of the individual data providers are responsible for sizing partitions for their datasets; the number of partitions can be computed from the number of rows and columns using the autoComputePartitions method.
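As a lower-level illustration, the provider can also be invoked directly rather than through the Datasets interface. This is a sketch only, assuming a SparkSession named spark exists and that the default table name is acceptable when tableName is omitted.

    from dbldatagen.datasets.basic_user import BasicUserProvider

    # Sketch: call the provider directly instead of going through dg.Datasets.
    # Assumes a SparkSession `spark`; tableName is left at its default.
    provider = BasicUserProvider()
    generator = provider.getTableGenerator(spark, rows=10000, partitions=-1,
                                           random=True)

    # With partitions=-1, the provider is expected to size partitions itself
    # (for example via autoComputePartitions) from the rows and column count.
    df = generator.build()
    print(df.count())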