Finding duplicates in data frame across columns and replacing them with unique values using R

Suppose you have a dataset with many variables, and you want to:

  • check if there are any duplicated values within each observation (row)
  • replace the duplicates with a random value drawn from a pool of possible values.

 

With this in mind, let's create a sample dataset:

df <- structure(list(
   v1 = c(10,20,30,40,50,60,70,80)
  ,v2 = c(5,7,6,8,6,8,9,4)
  ,v3 = c(2,4,6,6,7,8,8,4)
  ,v4 = c(8,7,3,1,8,7,8,4)
  ,v5 = c(2,4,6,7,8,9,9,3))
  ,.Names = c("ID","a", "b","d", "e")
  ,.typeOf = c("numeric", "numeric", "numeric","numeric","numeric")
  ,row.names = c(NA, -8L)
  ,class = "data.frame"
  ,comment = "Sample dataframe for duplication example")

The data frame has the following interesting characteristics:

[Screenshot: the sample data frame in RStudio]

Upon closer inspection, you will see duplicated values across the different variables (ID, a, b, d and e). Let's focus on two rows (a quick per-row check is sketched right after this list):

  • row 2 contains two duplicated values (2x value 4 and 2x value 7)
  • row 3 contains one value duplicated three times (3x value 6)
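
To see which rows actually contain duplicated values, a quick per-row check – a minimal sketch using only base R on the df created above – can be run first:

# count duplicated values in each row; 0 means all values in the row are unique
dup_per_row <- apply(df, 1, function(r) sum(duplicated(r)))
dup_per_row
which(dup_per_row > 0)    # rows that will need a replacement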

Our pool of possible replacement values is:

    possible_new_values <- c(1,2,3,4,5,6,7,8,9)

 

The loop that slices the data row by row, finds the duplicated positions and replaces them looks like this:

library(dplyr)   # needed for slice() and the %>% pipe

for (row in 1:nrow(df)) {
  vec <- df %>% slice(row) %>% unlist() %>% unname()
  # check for duplicates within this row
  if (length(unique(vec)) != length(df)) {
    positions <- which(duplicated(vec))
    # iterate through the duplicated positions
    for (i in 1:length(positions)) {
      # sample a replacement from the pool, excluding values already present in the row
      df[row, positions[i]] <- sample(possible_new_values[!possible_new_values %in% unique(vec)], 1)
    }
  }
}
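
Since sample() draws at random, the replaced values will differ between runs. If you need the replacement to be reproducible, set a seed before running the loop – the seed value below is arbitrary:

set.seed(42)    # any fixed seed makes the random replacements repeatable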

revealing the final replacement of values:

[Screenshot: the data frame after replacement in RStudio]

Putting the old data.frame and the new data.frame (with replaced values) side by side, the end result looks like this:

[Screenshot: old and new data frames side by side]

showing how the replacement works for each row across the given columns.

Nifty, yet useful data de-duplication and value replacement, when you need it.

As always, code is available at Github.

Happy coding with R 🙂

 


Python Pandas MultiIndex and reading data from SQL Server

Python Pandas MultiIndex is hierarchical indexing over multiple tuples or arrays of data, enabling advanced dataframe wrangling and analysis on higher-dimensional data.

[Screenshot: pandas dataframe with MultiIndex]

Everyone has come across MultiIndex in Python Pandas at some point, and it can be a little annoying the first time.

1. Python Pandas MultiIndex in SQL Server

With this in mind, let's create an example pandas dataframe:

import pandas as pd

dt = pd.DataFrame([
                    ['2019-05-12','python',6],
                    ['2019-05-13','python',5],
                    ['2019-05-14','python',10],
                    ['2019-05-12','t-sql',12],
                    ['2019-05-13','t-sql',12],
                    ['2019-05-14','t-sql',12]
                  ],
                  columns = ['date','language','version'])

And the results are:

[Screenshot: the resulting dataframe]

MultiIndex gives further data wrangling the capability of working with higher-dimensional data. First, the same dataframe can be pushed through SQL Server's in-database Python as simply as:

EXEC sp_execute_external_script
	 @language = N'Python'
	,@script = N'
import pandas as pd

dt = pd.DataFrame([
                    [''2019-05-12'',''python'',6],
                    [''2019-05-13'',''python'',5],
                    [''2019-05-14'',''python'',10],
                    [''2019-05-12'',''t-sql'',12],
                    [''2019-05-13'',''t-sql'',12],
                    [''2019-05-14'',''t-sql'',12]
                  ],
                  columns = [''date'',''language'',''version''])

OutputDataSet = dt'
WITH RESULT SETS
((
  py_date SMALLDATETIME
 ,py_lang VARCHAR(10)
 ,py_ver TINYINT
))

So what happens when we have data with a primary key over multiple columns, and we want to maintain this dimensionality in Python as well as propagate the results back using a Pandas MultiIndex?

Now, let’s run the same example with additional option of adding pandas MultiIndex:

EXEC sp_execute_external_script
	 @language = N'Python'
	,@script = N'
import pandas as pd

dt = pd.DataFrame([
                    [''2019-05-12'',''python'',6],
                    [''2019-05-13'',''python'',5],
                    [''2019-05-14'',''python'',10],
                    [''2019-05-12'',''t-sql'',12],
                    [''2019-05-13'',''t-sql'',12],
                    [''2019-05-14'',''t-sql'',12]
                  ],
                  columns = [''date'',''language'',''version''])
dt.set_index([''language'', ''version''], inplace=True)
OutputDataSet = dt'
WITH RESULT SETS
((
 py_date SMALLDATETIME
,py_lang VARCHAR(10)
,py_ver TINYINT
))

In return, we get an error message:

[Screenshot: error message in SSMS]

Msg 11537, Level 16, State 3, Line 135
EXECUTE statement failed because its WITH RESULT SETS clause specified 3 column(s) for result set number 1, but the statement sent 1 column(s) at run time.

This is due to how pandas MultiIndex operates: it moves the index columns (hence MultiIndex) into the index and keeps them there for higher-dimensional data wrangling, so they are no longer returned as regular columns. The error message would not have occurred if I had changed the RESULT SETS clause to a single column (py_date). The same can be seen in your favorite Python IDE, where the columns are moved into the index:

[Screenshot: dataframe with columns moved into the index]

To export pandas dataframes from in-database Machine Learning Services in SQL Server, one way is to concatenate the index values into a regular column within Python, preserving (and exposing) the index information.

-- MultiIndex with inplace True
-- adding a concatenated value
EXEC sp_execute_external_script
 @language = N'Python'
,@script = N'

import pandas as pd
dt = pd.DataFrame([
                    [''2019-05-12'',''python'',6],
                    [''2019-05-13'',''python'',5],
                    [''2019-05-14'',''python'',10],
                    [''2019-05-12'',''t-sql'',12],
                    [''2019-05-12'',''t-sql'',12],
                    [''2019-05-12'',''t-sql'',12]
                  ],
                  columns = [''date'',''language'',''version''])

dt[''PreservedIndex''] = dt[''language''].astype(str) + '';'' + \
 dt[''version''].astype(str)

dt.set_index([''language'', ''version''], inplace=True)
OutputDataSet=dt'
WITH RESULT SETS
((
  py_date SMALLDATETIME
 ,py_PreservedIndex VARCHAR(30)
))

And the results would be:

[Screenshot: result set with the preserved index column]

So we managed to expose the constructed MultiIndex back to SQL Server.

2. How SQL Server indexes are handled by Python Pandas

Continuing with the same example, let's create a SQL Server table with a primary key constraint.

-- Going from T-SQL
-- And why T-SQL constraints does not play a role in Python

DROP TABLE IF EXISTS PyLang;
GO

CREATE TABLE PyLang (
	 [Date] DATETIME NOT NULL
	,[language] VARCHAR(10) NOT NULL
	,[version] INT
	,CONSTRAINT PK_PyLang PRIMARY KEY(date, language)
);
GO

INSERT INTO PyLang (date, language, version)
          SELECT '2019-05-12', 'python', 6
UNION ALL SELECT '2019-05-13', 'python', 5
UNION ALL SELECT '2019-05-14', 'python', 10
UNION ALL SELECT '2019-05-12', 't-sql', 12
UNION ALL SELECT '2019-05-13', 't-sql', 12
UNION ALL SELECT '2019-05-14', 't-sql', 12;

SELECT * FROM PyLang;
GO

with a simple dataset:

[Screenshot: contents of the PyLang table]

With the primary key created on Date and Language, we can easily override this in Python pandas and create a MultiIndex on different columns:

EXEC sp_execute_external_script
 @language = N'Python'
,@script = N'

import pandas as pd

dt = InputDataSet
dt[''PreservedIndex''] = dt[''language''].astype(str) + '';'' + \
 dt[''version''].astype(str)
dt.set_index([''language'', ''version''], inplace=True)
OutputDataSet=dt'
,@input_data_1 = N'SELECT * FROM PyLang'
WITH RESULT SETS
((
  py_date SMALLDATETIME
 ,py_MyPreservedIndex VARCHAR(40)
))

With the results:

[Screenshot: result set with the new preserved index]

showing that the pandas MultiIndex was created using the Language and Version columns, whereas in the T-SQL table the index was created on Date and Language.

Conclusion

  1. SQL Server and Python Pandas indexes are two different worlds and should not be mixed.
  2. SQL Server uses indexes primarily for DML operations and to keep data ACID.
  3. Python Pandas uses Index and MultiIndex to keep data dimensionality when performing data wrangling and statistical analysis.
  4. SQL Server indexes and Python Pandas indexes don't know about each other's existence; if you want to propagate a T-SQL index to Python Pandas (to minimise the impact of duplicates or missing values, or to impose the relational model), it needs to be introduced and created once the data enters "the Python world".
  5. When inserting Python Pandas MultiIndex results into a SQL Server table with an index, make sure the data dimensionality fits and, in the case of T-SQL primary keys, that the Python code is not producing any duplicates.
  6. Both Python and T-SQL indexes help you keep your data consistent.
  7. Performance degrades in both cases if indexes are used incorrectly or overused.

 

As always, code is available at GitHub.

Happy coding.


Creating data frame using structure() function in R

The structure() function is a simple yet powerful function that describes a given object with given attributes. It is part of base R, so there is no need to load any additional library. And since the function dates back to the S language, it has been in base R from the earliest versions, making it backward and forward compatible.

Example:

dd <- structure(list( 
         year = c(2001, 2002, 2004, 2006) 
        ,length_days = c(366.3240, 365.4124, 366.5323423, 364.9573234)) 
        ,.Names = c("year", "length of days") 
        ,row.names = c(NA, -4L) 
        ,class = "data.frame")

All objects created using structure() – whether homogeneous (matrix, vector) or heterogeneous (data.frame, list) – have additional metadata information stored using attributes. For example, creating a simple vector with additional metadata:

just_vector <- structure(1:10, comment = "This is my simple vector with info")

And by using function:

attributes(just_vector)

We get the information back:

$`comment`
[1] "This is my simple vector with info"

In one go

So, let us suppose you want to create a structure (an S3 object) in one step. The following would create a data.frame (heterogeneous) in several steps:

year = c(1999, 2002, 2005, 2008)
pollution = c(346.82,134.308821199349, 130.430379885892, 88.275457392443)
dd2 <- data.frame(year,pollution)
dd2$year <- as.factor(dd2$year)

Using structure(), we can do this in a simpler way, in one step:

dd <- structure(list( 
   year = as.factor(c(2001, 2002, 2004, 2006))
  ,length_days = c(366.3240, 365.4124, 366.5323423, 364.9573234)) 
  ,.Names = c("year", "length of days") 
  ,row.names = c(NA, -4L) 
  ,class = "data.frame")

 

Useful cases for the structure() function are:

  • when creating a smaller data set within your Jupyter notebook (using Markdown)
  • when creating data sets within your R code demo/example (instead of using external CSV / TXT / JSON files)
  • when describing a given object with mixed data types (e.g. a data frame) and preparing it for data import
  • when creating many R environments, each with an independent data set
  • for persisting data
  • and many more…

Constructing a data frame with additional attributes and comments:

dd3 <- structure(list(
   v1 = as.factor(c(2001, 2002, 2004, 2006))
  ,v2 = I(c(2001, 2002, 2004, 2006))
  ,v3 = ordered(c(2001, 2002, 2004, 2006))
  ,v4 = as.double(c(366.3240, 365.4124, 366.5323423, 364.9573234)))
  ,.Names = c("year", "AsIs Year","yearO", "length of days")
  ,.typeOf = c("factor", "numeric", "ordered","numeric")
  ,row.names = c(NA, -4L)
  ,class = "data.frame"
  ,comment = "Ordered YearO for categorical analysis and other variables")

Nesting lists within lists is also possible, and even preserving the original data set as a hidden attribute of the data frame is an option (a small sketch follows below).
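
A minimal sketch of that idea – the attribute name original_data is mine, purely for illustration – attaching the raw vectors to dd3 while keeping them out of the data frame itself:

# store the raw input as an extra attribute, invisible in normal data frame output
attr(dd3, "original_data") <- list(year = c(2001, 2002, 2004, 2006)
                                  ,days = c(366.3240, 365.4124, 366.5323423, 364.9573234))

attributes(dd3)$original_data    # retrieve it later when needed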

And checking comments can be done as:

attributes(dd3)$comment

attr(dd3, which="comment")

 

Both yield the same result:

> attributes(dd3)$comment
[1] "Ordered YearO for categorical analysis and other variables"
> attr(dd3, which="comment")
[1] "Ordered YearO for categorical analysis and other variables"

 

This simple yet very useful code example is, as always, available at Github.

Happy Rrrring! 🙂


Number 6174 or Kaprekar constant in R

The answer is not always 42, as explained in the Hitchhiker's Guide. Sometimes it is also 6174.

[Image: Kaprekar constant illustration]

The Kaprekar constant is one of those gems that make mathematics fun. Indian recreational mathematician D. R. Kaprekar found that the number 6174 – also known as Kaprekar's constant – is always reached when following these rules:

  1. Take any four-digit number with at least two different digits (1122, 5151, 1001, 4375 and so on).
  2. Sort the digits of the number in descending order and in ascending order to form two numbers.
  3. Subtract the ascending number from the descending number.
  4. Repeat steps 2 and 3 until you get the result 6174.

In practice, for example with the number 5462, the steps are:

6542 - 2456 = 4086
8640 -  468 = 8172
8721 - 1278 = 7443
7443 - 3447 = 3996
9963 - 3699 = 6264
6642 - 2466 = 4176
7641 - 1467 = 6174

or for number 6235:

6532 - 2356 = 4176
7641 - 1467 = 6174

Depending on the starting number, the number of steps varies.

The function for the Kaprekar procedure is:

library(stringr)   # str_extract_all() and str_sort() come from stringr

kap <- function(num){
    # check the length of the number
    if (nchar(num) == 4) {
        kaprekarConstant = 6174
        while (num != kaprekarConstant) {
          nums <- as.integer(str_extract_all(num, "[0-9]")[[1]])
          sortD <- as.integer(str_sort(nums, decreasing = TRUE))
          sortD <- as.integer(paste(sortD, collapse = ""))
          sortA <- as.integer(str_sort(nums, decreasing = FALSE))
          sortA <- as.integer(paste(sortA, collapse = ""))
          num = sortD - sortA
          r <- paste0('Pair is: ', sortD, ' and ', sortA, ' and result of subtraction is: ', num)
          print(r)
        }
    } else {
      print("Number must be 4-digits")
    }
}

 

Function can be used as:

kap(5462)

and it will return all the intermediate steps until the function converges.

[1] "Pair is: 6542 and 2456 and result of subtraction is: 4086"
[1] "Pair is: 8640 and 468  and result of subtraction is: 8172"
[1] "Pair is: 8721 and 1278 and result of subtraction is: 7443"
[1] "Pair is: 7443 and 3447 and result of subtraction is: 3996"
[1] "Pair is: 9963 and 3699 and result of subtraction is: 6264"
[1] "Pair is: 6642 and 2466 and result of subtraction is: 4176"
[1] "Pair is: 7641 and 1467 and result of subtraction is: 6174"

To make matters more interesting, let us find the distribution over all valid four-digit numbers, recording the number of steps needed to reach the constant.

First, we will find the solutions for all four-digit numbers and store them in a dataframe.

Create the empty dataframe:

#create empty dataframe for results
df_result <- data.frame(number =as.numeric(0), steps=as.numeric(0))
i = 1000
korak = 0

And then run the following loop:

# Generate the list of all 4-digit numbers
while (i <= 9999) {
  korak = 0
  num = i
  while ((korak <= 10) & (num != 6174)) {
    nums <- as.integer(str_extract_all(num, "[0-9]")[[1]])
    sortD <- as.integer(str_sort(nums, decreasing = TRUE))
    sortD <- as.integer(paste(sortD, collapse = ""))
    sortA <- as.integer(str_sort(nums, decreasing = FALSE))
    sortA <- as.integer(paste(sortA, collapse = ""))
    num = as.integer(sortD) - as.integer(sortA)

    korak = korak + 1
    if (num == 6174) {
      r <- paste0('Number is: ', as.integer(i), ' with steps: ', as.integer(korak))
      print(r)
      df_result <- rbind(df_result, data.frame(number = i, steps = korak))
    }
  }
  i = i + 1
}

 

Some fifteen seconds later, I had a dataframe with solutions for all valid four-digit numbers (valid solutions are those that comply with step 1 and converge within 10 steps).

[Screenshot: df_result in RStudio]

Now we can look at how the solutions are distributed. A summary of the solutions shows that on average 4.6 iterations (subtractions) were needed to reach the number 6174.

[Screenshot: summary of df_result in RStudio]
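
The same summary can be reproduced directly from the result dataframe:

summary(df_result$steps)    # distribution of the number of steps
mean(df_result$steps)       # average number of subtractions needed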

Counting how often each number of steps occurs, we get the most frequent solutions:

table(df_result$steps)
hist(df_result$steps)

[Screenshot: table and histogram of steps]

With some additional visuals, you can see the results as well:

library(ggplot2)
library(gridExtra)

#par(mfrow=c(1,2))
p1 <- ggplot(df_result, aes(x=number,y=steps)) + 
geom_bar(stat='identity') + 
scale_y_continuous(expand = c(0, 0), limits = c(0, 8))

p2 <- ggplot(df_result, aes(x=log10(number),y=steps)) + 
geom_point(alpha = 1/50)

grid.arrange(p1, p2, ncol=2, nrow = 1)

And the graph:

[Plot: steps per number and steps against log10(number)]

A lot of numbers converge on the third step – roughly every 4th or 5th number (the shares can be checked with the short snippet below). We would need to look into the steps of these solutions to see what the numbers have in common. This will follow, so stay tuned.
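
To check what share of the numbers converges at each step, the table above can be turned into relative frequencies:

# share of four-digit numbers converging at each step count
round(prop.table(table(df_result$steps)), 3)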

Fun fact: at the time of writing this blog post, the number 6174 was not a built-in constant in base R. 🙂

As always, code is available at Github.

 

Happy Rrrring 🙂


Installing R using Powershell

Installing R from scratch and creating your favorite IDE setup is especially useful when making a fresh installation, or when you are developing and testing different versions.

This blog post will guide you through the essential steps (hopefully, there will not be many) of downloading the desired R engine and R GUI – in this case RStudio – and preparing the additional packages, with some custom helper functions, to be used in the client set-up / environment. And mostly using a PowerShell script.

[Screenshot: PowerShell window]

The test folder for this new R environment will be C:\DataTK\99_REnv\01_Source\, and the rest of the folder structure will be:

[Screenshot: folder structure]

The folder structure is completely arbitrary and can be changed accordingly.

1. Downloading the RStudio and R

All the programs will be installed to predefined paths (please note, these paths might vary on your machine):

  • RStudio ->  c:\Program Files\RStudio
  • R Engine -> c:\Program Files\R\R-3.5.1

Both paths can be different on your machine. In the folder structure, I will point to the 01_Source sub-folder, as shown in the PowerShell script.

$dir = "C:\DataTK\99_REnv\01_Source\"
Set-Location $dir

## Download RStudio for Windows machine

# Version of RStudio is deliberately set to a specific version
# so that the code is repeatable and always returns the same results
$urlRStudio = "https://download1.rstudio.org/RStudio-1.1.463.exe"
$outputRStudio = "$dir\RStudio.exe"

$wcRStudio = New-Object System.Net.WebClient
$wcRStudio.DownloadFile($urlRStudio, $outputRStudio) # $PSScriptRoot 
Write-Output "Download Completed"

## Download R engine for Windows machine
$urlR = "https://cran.r-project.org/bin/windows/base/R-3.5.2-win.exe"
$outputR = "$dir\R-win.exe"
$wcR = New-Object System.Net.WebClient
$wcR.DownloadFile($urlR, $outputR)
Write-Output "Download completed"

## Installing R / RStudio on desired Path
## Silent install
$dirRStudio = $dir + "RStudio.exe"
$dirR = $dir + "R-win.exe"

Start-Process -FilePath $dirRStudio -ArgumentList "/S /v/qn"
Start-Process -FilePath $dirR -ArgumentList "/S /v/qn"

Now that we have the R engine and RStudio installed, we need to repeat the process for the R packages. In the same manner, I will download the specific R packages.

2. Downloading the R packages

For the brevity of this post, I will only download a couple of R packages from the CRAN repository, but this list can be extended indefinitely.

There are many ways to retrieve the CRAN packages for a particular R version using PowerShell. I will demonstrate this using the Invoke-WebRequest cmdlet.

Point the cmdlet to the URL https://cran.r-project.org/bin/windows/contrib/3.5, where the list of all packages for this version is available. First we need to extract the HTML tag where the information is stored. Since the page stores the data in a table, we have to navigate to the tag html>body>table>tbody>tr>td>a, where the file name is presented.

[Screenshot: HTML structure of the CRAN package listing]

The package names are retrieved by:

[Screenshot: retrieved package names in PowerShell]

$ListRPackages= Invoke-WebRequest -Uri "https://cran.r-project.org/bin/windows/contrib/3.5"
$pack = ($ListRPackages.ParsedHtml.getElementsByTagName('a')).outerText

If you have the needed packages listed in a txt file, you can read the package names from the file, iterate through them and download the files:

$ListPackageLocation = "C:\DataTK\99_REnv\01_SourceList\packages.txt"
$PackList = Get-Content -Path $ListPackageLocation
$dir = "C:\DataTK\99_REnv\01_Source\"

ForEach ($Name in $PackList)
{
   $UrlRoot = "https://cran.r-project.org/bin/windows/contrib/3.5/"
   $url = $UrlRoot + $Name
   $FileName = $dir +'\' + $Name
   $PackagesOut = New-Object System.Net.WebClient
   $PackagesOut.DownloadFile($url, $FileName) 
   Write-Output "Download Completed"
}

 

Now that we have all the packages downloaded and programs installed, we can move to R.

3. Setting up the R Environment

In the folder structure, there is a folder containing the helper files:

[Screenshot: 03_RHelperFiles folder]

Paths.R

In this file, all the paths are defined and later used in the other files. It simply describes the folder structure:

sourcePath = "c:\\DataTK\\R_packages\\01_Source"
sourcePackagePath = "c:\\DataTK\\R_packages\\01_Sourcelist"
libPath = "C:\\DataTK\\R_Packages\\02_R"
wdPath = "C:\\DataTK\\R_Packages"

 

Functions.R

This file keeps all the functions in one place, mainly for sharing or for shared projects. In this case there are just two functions; one of them checks for and installs missing packages, read from the folder structure (the packages previously downloaded using PowerShell).

# Function for sum of squares for two input integers
sum_squares <- function(x,y) {
  x^2 + y^2
}

# Function for package installation with check for existing packages
function_install_4 <- function(df_name) {
  for (i in 1:nrow(df_name)) {
    if (df_name[i,2] %in% rownames(installed.packages(lib.loc=libPath))) {
      #print(df_name[i,2])
      print(paste0("Package ", df_name[i,2], " already installed."))
    } else {
      install.packages(df_name[i,1], type="source", repos=NULL, lib=libPath)
    }
  }
}

 

Initial.R

This file wraps all the helper files in one place and invokes everything from the functions and paths files:

# Loading files with function lists and Paths
source(file="C:\\DataTK\\R_Packages\\paths.R")
source(file="C:\\DataTK\\R_Packages\\functions.R")

#updating the list of packages
setwd(sourcePackagePath)
listPackages <- data.frame(read.csv("packages.txt", header=FALSE))
names(listPackages)[1] <- "name"

#just names of the packages
temp <- strsplit(as.character(listPackages$name),"_")
temp <- data.frame(library=matrix(unlist(temp), ncol=2, byrow=TRUE)[,1])
listPackages<- cbind(name=listPackages, library=temp$library)


#installing the missing packages
setwd(sourcePath)
function_install_4(listPackages)

library(dplyr, lib.loc=libPath)
library(ggplot2, lib.loc=libPath)
library(knitr, lib.loc=libPath)

4. Start using R

Finally, every new R file or project needs just a single line included:

#initialize
source(file="C:\\DataTK\\R_Packages\\initial.R")

And this will load all the settings and packages, and make sure the downloaded environment is correctly installed.
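
To quickly verify that the packages really landed in the custom library folder, the same installed.packages() call used in function_install_4 can be run interactively:

# list the packages available in the custom library location
rownames(installed.packages(lib.loc = libPath))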

 

As always, complete code is available on Github.

Happy coding and happy Rrrrring 🙂


Window Aggregate operator in batch mode in SQL Server 2019

This came as a surprise when I was calculating simple statistics on my dataset – in particular min, max and median. The first two are trivial; the last one is the one that caught my attention.

While looking for the fastest way to calculate the median for a given dataset, I stumbled upon an interesting thing. The window function was performing super slowly, and calling R or Python using sp_execute_external_script outperformed the window function as well, which raised a couple of questions.

But first, I created a sample table and populated it with sample rows:

DROP TABLE IF EXISTS  t1;
GO

CREATE TABLE t1
(id INT IDENTITY(1,1) NOT NULL
,c1 INT
,c2 SMALLINT
,t VARCHAR(10) 
)

SET NOCOUNT ON;
INSERT INTO t1 (c1,c2,t)
SELECT 
	x.* FROM
(
	SELECT 
	ABS(CAST(NEWID() AS BINARY(6)) %1000) AS c1
	,ABS(CAST(NEWID() AS BINARY(6)) %1000) AS c2
	,'text' AS t
) AS x
	CROSS JOIN (SELECT number FROM master..spt_values) AS n
	CROSS JOIN (SELECT number FROM master..spt_values) AS n2
GO 2

 

The query generated – in my case – a little over 13 million records, just enough to test the performance.

Starting with the median calculation that sorts the lower and upper half of the rows respectively, the calculation time was surprisingly long:

-- Itzik Solution
SELECT (
(SELECT MAX(c1) FROM
  (SELECT TOP 50 PERCENT c1 FROM t1 ORDER BY c1) AS BottomHalf)
+
(SELECT MIN(c1) FROM
  (SELECT TOP 50 PERCENT c1 FROM t1 ORDER BY c1 DESC) AS TopHalf)
) / 2 AS Median

Before and after each run, I cleared the cached execution plan. The execution on 13 million rows took – on my laptop – around 45 seconds.

The next query for the median calculation was a window function query.

SELECT DISTINCT
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY c1) 
       OVER (PARTITION BY (SELECT 1)) AS MedianCont
FROM t1

To my surprise, the performance was even worse – and at this point I have to say I was running this on SQL Server 2017 with CU7. Luckily, I also had SQL Server 2019 CTP 2.0 installed, and there, with no further optimization, the query ran in little over 1 second.

So the difference between the versions was enormous. I could replicate the same results by switching the database compatibility level between 140 and 150.

ALTER DATABASE SQLRPY 
SET COMPATIBILITY_LEVEL = 140; 
GO
SELECT DISTINCT
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY c1) 
    OVER (PARTITION BY (SELECT 1)) AS MedianCont140
FROM t1

ALTER DATABASE SQLRPY 
SET COMPATIBILITY_LEVEL = 150; 
GO

SELECT DISTINCT
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY c1) 
    OVER (PARTITION BY (SELECT 1)) AS MedianCont150
FROM t1

The answer was found in the execution plan. When running the window function under compatibility level 140, the optimizer decides to create a nested loop twice, for both the upper and the lower 50% groups of the dataset:

[Execution plan under compatibility level 140]

This plan is somewhat similar to the upper/lower 50% approach, but with only one nested loop:

[Execution plan for the TOP 50 PERCENT query]

The difference is that when running the window function calculation of the median on SQL Server 2017, the query optimizer chooses row execution mode for the built-in window function with WITHIN GROUP.

[Execution plan detail: row mode window aggregate under compatibility level 140]

This was, as far as I knew, not supposed to be an issue since SQL Server 2016, where a batch mode operator for window aggregation was already available.

When switching to compatibility level 150 and running the same window function, the execution plan is, as expected:

[Execution plan under compatibility level 150]

And the window aggregate uses batch mode:

[Execution plan detail: batch mode window aggregate under compatibility level 150]

When calculating Median using R:

EXEC sp_Execute_External_Script
   @language = N'R'
  ,@script = N'd <- InputDataSet
               OutputDataSet <- data.frame(median(d$c1))'
  ,@input_data_1 = N'select c1 from t1'
WITH RESULT SETS (( Median_R VARCHAR(100) ));
GO

or Python:

EXEC sp_Execute_External_Script
  @language = N'Python'
 ,@script = N'
import pandas as pd
dd = pd.DataFrame(data=InputDataSet)
os2 = dd.median()[0]
OutputDataSet = pd.DataFrame({''a'':os2}, index=[0])'
 ,@input_data_1 = N'select c1 from t1'
WITH RESULT SETS (( MEdian_Python VARCHAR(100) ));
GO

both execute and return the results in about 5 seconds, so there is no big difference between R and Python when handling 13 million rows for calculating simple statistics.

To wrap up: if you find yourself in a situation where you need to calculate – as in my case – the median or similar statistics using a window function with WITHIN GROUP, R or Python will be the faster solution compared to T-SQL on older versions. If you have the ability to use SQL Server 2019, however, T-SQL is your best choice.

The code and the plans used in this blog post are available, as always, at Github.


Friday five MVP award blog post

A short post on my contribution to the Friday Five MVP Award blog post from September 2018.

Link to the blog:  https://blogs.msdn.microsoft.com/mvpawardprogram/2018/09/28/friday-five-september-28/

Happy New Year 2019!

See you in 2019.
