Advent of 2023, Day 16 – Creating data pipelines for Fabric data warehouse

In this Microsoft Fabric series:

  1. Dec 01: What is Microsoft Fabric?
  2. Dec 02: Getting started with Microsoft Fabric
  3. Dec 03: What is lakehouse in Fabric?
  4. Dec 04: Delta lake and delta tables in Microsoft Fabric
  5. Dec 05: Getting data into lakehouse
  6. Dec 06: SQL Analytics endpoint
  7. Dec 07: SQL commands in SQL Analytics endpoint
  8. Dec 08: Using Lakehouse REST API
  9. Dec 09: Building custom environments
  10. Dec 10: Creating Job Spark definition
  11. Dec 11: Starting data science with Microsoft Fabric
  12. Dec 12: Creating data science experiments with Microsoft Fabric
  13. Dec 13: Creating ML Model with Microsoft Fabric
  14. Dec 14: Data warehouse with Microsoft Fabric
  15. Dec 15: Building warehouse with Microsoft Fabric

With the Fabric warehouse created and explored, let’s see how we can use pipelines to get data into the warehouse.

In the existing data warehouse, we will introduce new data. Clicking “new data” offers two options: pipelines and dataflows. Select pipelines and give the pipeline a name.
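As a side note, the same pipeline item can also be created programmatically through the Fabric REST Items API. A minimal sketch in Python; treat the workspace ID and token as placeholders, and note that the item type name is my assumption of the current API shape:

import requests

# Placeholders -- replace with your workspace ID and a valid Microsoft Entra (AAD) token
workspace_id = "<workspace-id>"
token = "<access-token>"

url = f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/items"
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

# Create an empty data pipeline item; the activities are then defined in the pipeline editor
body = {"displayName": "DWH_pipeline", "type": "DataPipeline"}

response = requests.post(url, headers=headers, json=body)
print(response.status_code, response.text)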

For the source, we could create a new Azure SQL Database:

or we could use a simple ADLS Gen2 storage account:

But instead, we will create a new Delta table using Spark in the lakehouse. First, we create the (empty) Delta table in a notebook:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, LongType, TimestampType, IntegerType, StringType

spark = SparkSession.builder \
    .appName("CreateEmptyDeltaTable") \
    .config("spark.jars.packages", "io.delta:delta-core_2.12:1.0.0") \
    .getOrCreate()

# Define the schema
schema = StructType([
    StructField("ID", LongType(), False),
    StructField("TimeIngress", TimestampType(), True),
    StructField("valOfIngress", IntegerType(), True),
    StructField("textIngress", StringType(), True)
])

# Create an empty DataFrame with the specified schema
empty_df = spark.createDataFrame([], schema=schema)

# Write the empty DataFrame as a Delta table
empty_df.write.format("delta").mode("overwrite").save("abfss://1860beee-xxxxxxxxx92e1@onelake.dfs.fabric.microsoft.com/a574dxxxxxxxxx28f/Tables/SampleIngress")

spark.stop()
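To sanity-check that the empty table landed in OneLake, you can read it straight back (a short sketch; the path placeholders stand for the same workspace and lakehouse IDs used in the write above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("CheckSampleIngressTable").getOrCreate()

# Placeholder path -- use the same OneLake path as in the write above
table_path = "abfss://<workspace-id>@onelake.dfs.fabric.microsoft.com/<lakehouse-id>/Tables/SampleIngress"

check_df = spark.read.format("delta").load(table_path)
check_df.printSchema()    # expect ID, TimeIngress, valOfIngress, textIngress
print(check_df.count())   # expect 0 rows for the freshly created empty table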

We will then create another notebook to be used in the pipeline. This notebook serves only to ingest data into the Delta table:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, LongType, TimestampType, IntegerType, StringType
from pyspark.sql.functions import col, current_timestamp
import random
import string

# Create a Spark session
spark = SparkSession.builder \
    .appName("InsertRandomData2SampleIngressTable") \
    .config("spark.jars.packages", "io.delta:delta-core_2.12:1.0.0") \
    .getOrCreate()

# Define the schema
schema = StructType([
    StructField("ID", LongType(), False),
    StructField("TimeIngress", TimestampType(), True),
    StructField("valOfIngress", IntegerType(), True),
    StructField("textIngress", StringType(), True)
])

# Generate ten rows with random values and stamp TimeIngress with the current ingress time
random_data = [(i, None, random.randint(1, 100), ''.join(random.choices(string.ascii_letters, k=50))) for i in range(11, 21)]
random_df = spark.createDataFrame(random_data, schema=schema) \
    .withColumn("TimeIngress", current_timestamp())

random_df.write.format("delta").mode("append").save("abfss://1860beeexxxxxxb92e1@onelake.dfs.fabric.microsoft.com/a574d1a3-xxxxxxxx8f/Tables/SampleIngress")

spark.stop()
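One thing to note: the notebook above always inserts IDs 11 through 20, so every scheduled run appends the same key values. If you would rather have each run continue the numbering, a small variation (again a sketch with placeholder paths) is to read the current maximum ID first:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, current_timestamp, max as spark_max
import random
import string

spark = SparkSession.builder.appName("InsertRandomDataContinuingIDs").getOrCreate()

# Placeholder path -- same OneLake location as above
table_path = "abfss://<workspace-id>@onelake.dfs.fabric.microsoft.com/<lakehouse-id>/Tables/SampleIngress"

# Find the highest existing ID (0 if the table is still empty)
existing = spark.read.format("delta").load(table_path)
start_id = (existing.agg(spark_max(col("ID"))).collect()[0][0] or 0) + 1

# Generate ten new rows, numbered from start_id onwards
rows = [(i, None, random.randint(1, 100),
         ''.join(random.choices(string.ascii_letters, k=50)))
        for i in range(start_id, start_id + 10)]

new_df = (spark.createDataFrame(rows, schema=existing.schema)
               .withColumn("TimeIngress", current_timestamp()))

new_df.write.format("delta").mode("append").save(table_path)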

And in the DWH_pipeline, we can choose to copy data from the Delta table to the DWH table

with the following mapping:
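The mapping itself is a simple one-to-one column mapping between the Delta table and the warehouse table. Expressed in copy-activity terms it looks roughly like the following (shown as a Python dict purely for illustration; the exact property names stored in the pipeline JSON are an assumption on my part):

# Illustrative one-to-one mapping; the TabularTranslator-style property names
# are assumptions, not copied from the actual pipeline definition
column_mapping = {
    "type": "TabularTranslator",
    "mappings": [
        {"source": {"name": "ID"},           "sink": {"name": "ID"}},
        {"source": {"name": "TimeIngress"},  "sink": {"name": "TimeIngress"}},
        {"source": {"name": "valOfIngress"}, "sink": {"name": "valOfIngress"}},
        {"source": {"name": "textIngress"},  "sink": {"name": "textIngress"}},
    ],
}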

So far, we have created:
1) a notebook for the initial Delta table creation,
2) a notebook for inserting random data into the Delta table,
3) a table in the warehouse as the destination, and
4) a pipeline for copying data from the Delta table to the DWH table.

We can also schedule the ingress notebook (the one inserting random data into the Delta table) to be executed every 10 seconds, schedule the pipeline to run every minute, and observe the results.
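If you prefer to kick off the pipeline programmatically rather than wait for the schedule, the Fabric job scheduler REST API exposes an on-demand run endpoint. A hedged sketch; the workspace ID, pipeline item ID and token are placeholders, and the jobType value is my assumption for data pipelines:

import requests

# Placeholders -- workspace ID, the pipeline's item ID, and a valid Microsoft Entra token
workspace_id = "<workspace-id>"
pipeline_id = "<pipeline-item-id>"
token = "<access-token>"

# Run-on-demand job instance for the pipeline item (jobType value assumed to be "Pipeline")
url = (f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
       f"/items/{pipeline_id}/jobs/instances?jobType=Pipeline")

response = requests.post(url, headers={"Authorization": f"Bearer {token}"})
print(response.status_code)   # 202 means the run was accepted and queued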

So we have two schedules:

In the data warehouse, we can now observe the records:

And after two minutes, the row count keeps growing.

You can also check the runs of the notebooks (first screenshot) and of the pipelines (second screenshot):

The best way to check the runs, though, is to use the Monitoring hub:

Tomorrow we will be looking into Power BI.

The complete set of code, documents, notebooks, and all of the materials will be available in the GitHub repository: https://github.com/tomaztk/Microsoft-Fabric

Happy Advent of 2023! 🙂
