Advent of 2021, Day 13 – Spark SQL bucketing and partitioning

Series of Apache Spark posts:

Spark SQL also includes JDBC and ODBC drivers that give it the capability to read data from other databases. Data is returned as DataFrames and can easily be processed in Spark SQL. Databases that can connect to Spark SQL include:
– Microsoft SQL Server
– MariaDB
– PostgreSQL
– Oracle DB
– DB2

JDBC can also be used with Kerberos authentication and a keytab; before use, make sure that the built-in connection provider supports Kerberos authentication with a keytab.
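As a minimal sketch, the Kerberos-secured read boils down to passing `principal` and `keytab` alongside the usual JDBC options. The host, database, principal, and keytab path below are placeholders, not values from this post:

```python
# Hypothetical JDBC options for a Kerberos-secured SQL Server source.
# Host, database, principal, and keytab path are placeholders.
jdbc_options = {
    "url": "jdbc:sqlserver://dbserver:1433;databaseName=Invoices",
    "dbtable": "dbo.InvoiceHead",
    "principal": "tomaz@EXAMPLE.COM",
    "keytab": "/etc/security/keytabs/tomaz.keytab",
}

# With an active SparkSession, these options would be passed as:
# df = spark.read.format("jdbc").options(**jdbc_options).load()
```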

Using JDBC to store data using SQL:

CREATE TEMPORARY VIEW jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:sqlserver://dbserver",
  dbtable "dbo.InvoiceHead",
  user "tomaz",
  password "pa$$w0rd"
)

INSERT INTO TABLE jdbcTable
SELECT * FROM Invoices

With R, reading and storing data:

# Loading data from a JDBC source
df <- read.jdbc("jdbc:sqlserver://localhost", "dbo.InvoiceHead", user = "tomaz", password = "pa$$w0rd")

# Saving data to a JDBC source
write.jdbc(df, "jdbc:sqlserver://localhost", "dbo.InvoiceHead", user = "tomaz", password = "pa$$w0rd")

To optimize reads, we can also store the data as persistent Hive tables. Persistent datasource tables have per-partition metadata stored in the Hive metastore, which brings a couple of benefits: 1) the metastore can return only the partitions necessary for a query, so discovering all partitions on the first query to the table is no longer needed, and 2) Hive DDLs are available for tables created with the Datasource API.

Partitioning and bucketing

Partitioning and Bucketing in Hive are used to improve performance by eliminating table scans when dealing with a large set of data on a Hadoop file system (HDFS). The major difference between them is how they split the data.

Hive partitioning organises a large table into smaller logical tables based on the values of one or more columns: one logical table (partition) for each distinct value. A single table can have one or more partitions, and each partition corresponds to an underlying directory of the table stored in HDFS.

ZIP code, year, year-month, region, state, and many more are perfect candidates for partitioning a persistent table. If the data holds 150 distinct ZIP codes, partitioning by ZIP creates 150 partitions, so queries that filter on ZIP (e.g. ZIP=1000 or ZIP=4283) return results much faster.

Each partition created also creates an underlying directory, in which the rows for that partition value are stored.
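The directory layout follows a simple convention: the partition column and value become a path component, so a query filtering on ZIP=1000 only has to read that one directory. A small illustration (plain Python, not Spark itself; the table root is a made-up path):

```python
def partition_path(table_root, column, value):
    """Return the Hive-style partition directory for a column value."""
    return f"{table_root}/{column}={value}"

# Rows with ZIP=1000 land under one directory, ZIP=4283 under another.
print(partition_path("/warehouse/invoices", "ZIP", 1000))
# /warehouse/invoices/ZIP=1000
```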

Bucketing splits the data into a fixed number of manageable files; it is also called clustering. The bucket for each row is determined by hashing the bucketing column into a user-defined number of buckets. Bucketing is defined on a single column (out of many columns in a table), and a bucketed table can additionally be partitioned. This further splits the data, making each unit smaller and improving query performance. Each bucket is stored as a file within the table's root directory or within the partition directories.

Bucketing and partitioning are applicable only to persistent Hive tables. With Python, these operations are straightforward:

df.write.bucketBy(42, "name").sortBy("age").saveAsTable("people_bucketed")

and with SQL:

CREATE TABLE users_bucketed_by_name(
  name STRING,
  favorite_color STRING,
  favorite_numbers array<integer>
) USING parquet
CLUSTERED BY(name) INTO 42 BUCKETS;

And you can do both on a single table using SQL:

CREATE TABLE users_bucketed_and_partitioned(
  name STRING,
  favorite_color STRING,
  favorite_numbers array<integer>
) USING parquet
PARTITIONED BY (favorite_color)
CLUSTERED BY(name) SORTED BY (favorite_numbers) INTO 42 BUCKETS;
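The same combination can be expressed through the DataFrame writer API. A sketch assuming an existing DataFrame `df` with the columns from the SQL example and a running SparkSession:

```python
def write_bucketed_partitioned(df, table_name="users_bucketed_and_partitioned"):
    # Partition by favorite_color (one directory per value), then bucket
    # each partition by name into 42 sorted files.
    (df.write
       .format("parquet")
       .partitionBy("favorite_color")
       .bucketBy(42, "name")
       .sortBy("favorite_numbers")
       .saveAsTable(table_name))
```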

Tomorrow we will look into SQL query hints and executions.

The complete set of code, documents, notebooks, and all of the materials will be available at the GitHub repository: https://github.com/tomaztk/Spark-for-data-engineers

Happy Spark Advent of 2021! 🙂
