Alteryx – Extract data from multiple Excel files with varying tab names

This was one of those challenging tasks that had to be done, and I felt it was worth blogging about as well.

Task –
Extract data from a limited range of a sheet across several Excel files. Each file has multiple tabs, and the tab names are not consistent from file to file.

For instance, let’s say I have three Excel files with the following name format – MonthlyReport_<<Month>>.xlsx, where Month is Dec, Jan or Feb.
The data within my ‘Dec’ file is as follows –
1_srcExcel_Dec

Nice, well formatted data. Let’s look at Jan’s data –
1_srcExcel_Jan

Uh oh! Somebody left a comment in Column E. Let’s now look at Feb’s data –
1_srcExcel_Feb
Aargh.. another person left a comment, this time in Column D.
My task is to get the data from Columns A and B only, across all the Excel files and their differently named tabs.

Solution-
The first step is to get all the sheet names from each of the Excel files. To achieve this we need a macro, and the steps are listed below –

  1. Open a new workflow. Drag in an Input tool, connect it to one of the Excel files, choose the option – <List of Sheet Names> – and set ‘Output File Name as Field’ to ‘Full Path’ as shown below. Setting these two options results in two output fields – FileName and Sheet Names –2_ListExcelSheet_InputData
  2. Add a ‘Formula’ transform. Set the ‘FileName’ field to the query we need, as shown below. The purpose of the expression is to replace ‘<List of Sheet Names>’ with the actual sheet name along with the range of data to be fetched. If we don’t specify the range and just stick to the sheet name, the workflow will fail because the data is non-uniform across files – this is important to take note of. I am pasting the expression here for easy reference; keep an eye on the backtick (`) characters around the range (a worked example follows this list) –
    Replace([FileName], "<List of Sheet Names>", "Select * from `" + [Sheet Names] + "$A1:B100000`")
    3_ListExcelSheet_ForumlaTool
  3. Drag in a ‘Macro Output’ and connect it to the ‘Formula’ transform.
  4. Drag a ‘Control Parameter’ onto the workflow. Connect the ‘search’ icon of the Control Parameter to the ‘lightning’ icon of the ‘Input Tool’. Doing so adds an ‘Update Value’ transform in between; just leave it as is – by default it selects ‘File – value’ as shown below. The purpose of the Control Parameter is to create a placeholder through which multiple Excel files can be passed. If you are coming from an SSIS background, this is the equivalent of a Foreach Loop over files, where the full file path is passed in as a parameter.
    4_ListExcelSheet_UpdateValue
  5. Save the workflow as ‘List of sheets Macro.yxmc’; the full workflow looks like the screenshot below. (Note – I have added annotations for the Control Parameter, Input Tool and Formula tool) –5_ListExcelSheets_Completed.png
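
To make step 2 concrete, here is a worked example of what the Formula transform produces. The file path and sheet name below are hypothetical, purely for illustration –

Incoming FileName   : C:\Reports\MonthlyReport_Dec.xlsx|<List of Sheet Names>
Incoming Sheet Names: December
Updated FileName    : C:\Reports\MonthlyReport_Dec.xlsx|Select * from `December$A1:B100000`

Everything after the pipe is treated as the sheet or query to read from that file, which is what lets the second macro below run it as a query against the workbook.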

We now have a macro that gives us the full path of each Excel file along with the query needed to get the data. Next we need another macro that takes that query and actually fetches the data. Remember, any time we need looping in Alteryx, a macro is the way to go. All this first macro does is list the sheet names in each Excel file and rewrite the file path with the query.

The second macro is fairly simple to build –

  1. Open a new workflow. Drag in an ‘Input Tool’ and connect it to one of the source Excel files; in my case I connected it to the ‘Feb’ file. When you connect to the file, here is how the ‘Choose Table or Specify Query’ window looks –
    6_ObtainDataFromExcel_ChooseTable.png
  2. Do not choose any sheet name; instead click on ‘SQL Editor’, paste the following query and click ‘OK’ – SELECT * FROM `February$A1:B100000`
    as shown below-7_ObtainDataFromExcel_SQLEditor
  3. Drag in a ‘Filter’ transform and connect it to the ‘Input Tool’. In the properties, select the column ‘Id’ and set the dropdown to ‘Is Not Null’. If you look at the query in the step above, I used the arbitrary row number B100000 – the data may or may not extend that far, so we need to filter out the empty rows.
  4. Drag in a ‘Macro Output’ and connect it to the ‘Filter’ transform above.
  5. Drag in a ‘Control Parameter’ and connect its ‘search’ icon to the ‘lightning’ icon of the ‘Input Data’ transform. An ‘Update Value’ transform appears in between and you don’t need to do anything with it; by default it selects the ‘File – value’ as shown below – 8_ObtainDataFromExcel_InputData.png
  6. Save the workflow with a name, say, Obtain limited range from excel.yxmc. Here is how it looks – 9_ObtainDataFromExcel_Overall

Okay, so we now have two macros: one for getting the sheet names and rewriting the full file path with the requisite query, and the other for getting the data from the Excel file. We now need a workflow that calls these two macros and does our job.

  1. Open a new workflow. Drag in the Directory tool and point it to the folder where the source files reside. Set the file specification to ‘*.xlsx’ so that we only pick up the Excel files.
  2. Drag in a ‘Formula’ transform and connect it to the Directory tool. Add a new column, give it a name, say ‘FullPath_SheetNames’, and set its value to – [FullPath]+"|<List of Sheet Names>".
  3. Right-click on the blank space in the workflow, go to ‘Insert’ -> ‘Macro’ -> ‘List of Sheets’ as shown below – 10_ObtainDataFromExcel_InsertFirstMacro.png
  4. Connect the macro to the ‘Formula’ transform. In the ‘Properties’ box of the macro, set ‘Choose list of sheets input field’ to ‘FullPath_SheetNames’. (Note – if you have not added any annotations in the macro, it will appear as ‘Control Parameter Input’.)
  5. Drag in a ‘Filter’ transform and connect it to the above macro. Set the filter to – [Sheet Names] != "Summary". I am doing this because each of my files has two sheets and the one we are interested in is not the ‘Summary’ sheet. I could have kept just one sheet in every file, but I wanted to show how, if need be, you can use the sheets separately for different purposes.
  6. Right-click on the blank space in the workflow, go to ‘Insert’ -> ‘Macro’ -> ‘Obtain Limited range from Excel’ and connect it to the ‘Filter’ transform.
  7. In the properties of the second macro, set the value to ‘FileName’.
  8. Drag in a ‘Browse’ transform to look at the output. (This is where you would typically plug in your Output Data tool in a real-world scenario.)

11_ExtractData_Overview

After running the workflow, here is the output from the Browse transform –
12_ExtractData_Output

That’s it. For my task I had to go through about 12 different Excel files with various tab names, and the out-of-the-box tools that Alteryx provides are powerful enough to get the job done.

Just falling in love with it.

Alteryx PostgreSQL Input – Character with Byte sequence error

This is one of the most troublesome errors I had encountered in a long time, and I finally found a way to get past it.

I kept getting the following error when trying to fetch data from my PostgreSQL source. The error was intermittent, popping up depending on the data being read. Here is how the error appears in the Messages window of Alteryx –

Error: Input Data (11): Error SQLExecute: ERROR: character with byte sequence 0xe2 0x80 0xad in encoding “UTF8” has no equivalent in encoding “WIN1252”;
Error while executing the query

The byte sequence that I highlighted corresponds to an invisible Unicode character (U+202D, the left-to-right override), which simply shows up as blank space in the data. I tried trimming out blank spaces, but then got another error pointing at a different character. The first workaround I applied was to create another ODBC data source using the Unicode driver; that resulted in a different problem altogether – all the date fields in the input data came through as text.

ODBC Admin.png

Here is the actual fix that helped me. Reading online, I saw that one needs to change the encoding setting from WIN1252 to UTF8. I didn’t want to do that at the database level, and I wasn’t sure where else to put it. After a bit of tinkering, I found that putting it in the Pre-Execute SQL Statement of the Input tool does the trick, as shown below –

Input Tool
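
In case the screenshot is hard to read, the statement amounts to the standard PostgreSQL session-level encoding override – a sketch of what goes into the pre-SQL box rather than a verbatim copy of my configuration:

--switch the session's client encoding so the driver no longer converts to WIN1252
SET client_encoding TO 'UTF8';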

Problem solved.

PostgreSQL Generate Date Ranges using recursive CTE

One of my tasks needed a proxy table of dates, on top of which I had to apply other business logic. Using PostgreSQL and a recursive CTE I was able to achieve this.

Here is the code –

--cte to get range of numbers, which in this case is 1 to 90
with recursive ints (n) AS (
select 1
union all
select n+1 from ints where n+1<= 90
)
--cte to get date ranges by using interval function and
--multiplying the ranges
,cte_dates
as
(
select current_date - interval '90days' + interval '1day' * n as dt
from ints
)
--the query below will now get you last 90 days date ranges
select dt from cte_dates

The point to note is that you can scale an interval simply by multiplying it with a plain integer.
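
A quick standalone illustration of that behaviour (a minimal sketch you can run in psql):

--an interval multiplied by an integer scales linearly
select interval '1 day' * 7 as one_week;
--for n = 90 the expression used in the CTE above lands back on the current date
select current_date - interval '90 days' + interval '1 day' * 90 as dt;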

Incremental Data Load using Alteryx

As I progress on my journey with Alteryx, I can’t help thinking about all the ways it can be stretched and explored. Here are some of the ETL scenarios that I have in mind –

  1. Incremental data loads
  2. Loading files from folder for a certain date range and moving them to archive
  3. Error handling and logging
  4. Performance handling (loading ~1M records)
  5. Configurability and deployment across various environments
  6. Version control

These are what I can think of for now. Let me start with the first of the above scenarios, i.e. incremental data loads. This is a quite common ETL scenario that every developer gets to work on over the course of their career.

Here is a sample staging table along with some sample data in it. This will be the source table –

create table stg_orders(
id int,
description varchar(10),
amount decimal(6,2),
created_date date
);

insert into stg_orders values (1,'Apples',20,'2017-12-15');
insert into stg_orders values (2,'Bananas',10,'2017-12-15');
insert into stg_orders values (3,'Mangoes',25,'2017-12-15');

Here is the destination table to which data is being loaded.

create table orders(
id int,
description varchar(10),
amount decimal(6,2),
created_date date
);

Incremental loading refers to the process of loading only the changed data from source to destination. How this ‘changed’ data is identified varies with the business requirements; most often it is based on a datetime field, which is the case here.

The first thing to ensure is that the destination has a controller table that holds the last loaded datetime. Below is the controller table I am using for this demo –

create table controller(
last_refreshed_date date
);
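
Before assembling the tools, it helps to see what the workflow is going to do expressed as plain SQL. Conceptually the incremental pull boils down to a single query over these two tables – a sketch only, since the workflow splits this work across separate tools:

--fetch only the rows newer than the last refresh recorded in the controller
select o.id, o.description, o.amount, o.created_date
from stg_orders o
where o.created_date >
 (
 select coalesce(max(last_refreshed_date),'2017-12-12')
 from controller
 );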

With the basic structure in place, let’s begin to assemble the workflow. Here is how the final workflow looks –
1_Workflow
It’s THAT simple – just three little tools. That’s the beauty of Alteryx: all the nitty-gritty column-level details are completely hidden. Let’s look at each of them. The first is an ‘Input Tool’ that gets the max last-refreshed date from the existing data. Here is the query used for it. When the workflow is run for the first time the controller table has no data in it, hence I have put in an arbitrary fallback date of 12th December, 2017; ideally this value would be the current date –

select coalesce(max(last_refreshed_date),'2017-12-12') as last_refreshed_date
from controller;

The second tool is a Dynamic Input tool – ‘Get Orders data’ – where the filter gets applied, and it requires a bit more explanation. Once you drag and drop the Dynamic Input tool, here are the steps to take, as shown below.

2_DynamicInput_SQLEditor

Click on the ‘Edit’ button. In the ‘Connect to a File or Database’ window, choose the source connection, and then in the SQL Editor put the actual source query that fetches the data from the source. As can be seen, the date I specified there is an arbitrary one. Once the query is in, click ‘OK’ in the current and subsequent windows to close the wizard. Connect this tool to the previous transform, i.e. the ‘Get Max Last Refreshed Date’ Input tool.
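
For reference, the source query in the SQL Editor would look something like this (a sketch – the hard-coded date is just a seed value that gets replaced at run time):

--placeholder source query; the literal date below gets overwritten
--with the controller value when the workflow runs
select id, description, amount, created_date
from stg_orders
where created_date > '2017-12-12';
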
3_DynamicInput_ModifySQLQuery

At this point you are just connected to the source SQL query, and we now need to configure the tool so that the filter gets updated. Select ‘Modify SQL Query’, click on the ‘Add’ dropdown and choose ‘SQL: Update WHERE Clause’. This will pop up a new window.

This is where, you could say, the ‘magic’ happens. As you can see below, the first part is where you select the appropriate part of the WHERE clause – if your WHERE clause has multiple predicates, all of them appear in the drop-down. The wizard automatically reads the filter and identifies the value that needs to be replaced. The second part is where you choose the incoming field that will replace it, which in our case is ‘last_refreshed_date’; then click ‘OK’.

4_DynamicInput_SQLWhereClause

Let’s get to the last part, i.e. the ‘Output Data’ tool – ‘Load data into Orders and update controller table’. Among its many options this tool has a ‘Post Create SQL Statement’, which is what we are going to use. Here is the SQL statement I have put in it –

BEGIN TRANSACTION;

INSERT INTO controller (last_refreshed_date)
SELECT MAX(created_date)
FROM orders;

END TRANSACTION;

This is a feature Informatica also has, but only on the source side. In Alteryx, both the ‘Input Data’ and ‘Output Data’ tools have this option. Here is how the output properties look –
6_OutputData_Properties

With all that set, let’s run the workflow. Here is the data in the orders table post-run –

7_Orders_Output
Here is the data in ‘Controller’ table –
8_Controller_Output_Run1

Let’s insert some more data in the source table –

insert into stg_orders values (4,'Apples',30,'2017-12-16');
insert into stg_orders values (5,'Bananas',2,'2017-12-17');

In this run you can see that only the 2 new records get picked up, which is what is expected; here is the data from the orders table –
9_Orders_Output_Run2

That’s about it. Here are a few points worth noting –

1. In a typical ETL, the controller table would have a few more columns such as Id, Total Records etc. – basically an auditing table.
2. The source SQL query may not be as simple as the one pasted in this demo; it can get quite complex, especially when loading data from an Operational Data Store (ODS) to a Data Warehouse (DW).
3. At no point did we need any variables to perform these operations.

Alteryx – First and Lasting impressions

If not for the new job I have joined, I would not have heard of this tool at all. Alteryx is another kid on the block that blends ETL, analysis and reporting into one. The reporting part hasn’t impressed me much, but some of the ETL/analysis features just blew me away. In this post I would like to point out a few key areas where I feel it races miles ahead of SSIS or Informatica.

Installation
You just need to go to the site and click on Download Now. It then prompts you to register, and you must opt in to their subscriptions. Once done, you have a 14-day trial of the product. It’s that simple.

Preview Data at every single stage post run
At each stage of the workflow you can see the data before and after each transformation once the workflow has been executed. I just can’t fathom how they are even doing this. Let me show you with an example.

Here is a simple workflow that takes student data as input, checks the Gender field (Male/Female) and then concatenates first name and last name for both flows.

1_StudentImport_Overview

If you come from an ETL background, you can quickly latch on to the transformations and what they do because they are so intuitive – even an hour’s video from YouTube is enough to scale up on them. If you look at each of the transformations above, there are green button-like icons before and after each one. They are basically the inputs and outputs (as if you hadn’t deciphered that already). Now, say the workflow is run: post-run, I can click on any of these buttons to see the data at that particular stage.

Let me pick the ‘Identify Gender’ transformation. Once you click on it, all of its inputs and outputs are available for preview. Below is how the input data looks; my condition was to separate the records by Gender –
2_IdentifyGender_Input
If I want to see what the ‘T’ output rows look like (i.e. Males), I just click on it –

2_IdentifyGender_True
Now let me have a look at the ‘F’ output rows –
3_IdentifyGender_False
Imagine this being available at every stage of the workflow – it’s just incredible to be able to see how your business rules play out at each step.
Let me repeat once more, in case you skipped the sub-heading: this preview is POST-RUN. I just can’t think of an equivalent for this in either SSIS or Informatica.

Testing the workflow without an Output –
Just scroll back up a little and look at the screenshot of the workflow I posted – it doesn’t contain an ‘Output’. The last transformation you see is just a UNION ALL. If you are developing a POC or just testing something out, you avoid having to create a destination and dump data into it. Of course, the Trash Destination from the SSIS stack quickly comes to mind, but that’s an add-on and not an out-of-the-box feature. I can’t think of an equivalent in Informatica.

Multicasting
You can multicast pretty much every transformation’s output and branch off to do entirely new logic altogether.

Dynamic column propagation –
I have been saving the best for last. Alteryx has this incredibly advanced capability of picking up new columns just as if they had been there all along. Let’s say I make two changes to the input file used above –
- Added a new column, say Location, at the end
- Added a new column, Is Married, after the ‘Last Name’ column

Without making any changes to the workflow, it just runs without throwing any error, and here is my output from the ‘Union’ transform –

4_ColumnAddition_WorkflowRun

HOLY COW! It’s just mind-blowing, ain’t it? The data modifications were fairly drastic – new columns added not just at the end but also in between – and Alteryx just doesn’t care. It just works!

I am sure I am just scratching the tip of the iceberg and there is a HUGE amount of exploration left to do. What is also fantastic about this product is the community behind it.

Their community forums and learning channels are free for anyone to ask questions and learn, much like the MSDN or Informatica communities. They run weekly challenges which are good fun if you want to flex your muscles and give it a try. The interface, though, I feel they could improve on – I get a bit claustrophobic with the overbearing green colour theme and design, but you get used to it.

All in all, I am loving it. Watch out for future posts where I detail how it fares on performance, error handling, configurability, looping, dynamic data parsing, etc.

Excel Import – date getting recognized as month

I had a really weird issue this morning when I wanted to import some sample data from CSV into Excel. One of the columns in my data was a date column. Upon import, here is how the data got displayed when I clicked on the column’s filter. All the data I had in it was from October 2017 only, but the filter was showing something else –

Incorrect Date Filter

The date values in the column are in dd/mm/yyyy format, but clearly Excel was interpreting them the other way, i.e. mm/dd/yyyy. I knew right away I had to change some settings, but the number of changes was large enough that I thought it better to document them.

Click on Start and go to ‘Date & time settings’. On the far right-hand side, under ‘Related settings’, click on ‘Additional date, time, & regional settings’ as shown below –
AdditionalDateTimeSettings
Under Region, click on ‘Change date, time, or number formats’ as shown below –

Change date,time or number formats

Here are the two changes you need to make –
1. Under the Formats tab, set ‘Format:’ to ‘English (Australia)’ as shown below –

RegionFormats.png
2. Under the Location tab, set ‘Home location:’ to ‘Australia’ as shown below –

RegionLocation

This step necessitates restarting the system. In this example I have used ‘Australia’ as the locale, as that’s where I am currently residing – do change it to the locale relevant to you.

That’s it. Post-restart the dates show up as expected, as shown below –

FinalResult

SSIS vs Informatica – Error Handling

A close friend of mine who, like me, had been working on MS-BI all through his career until recently has moved on to a different ETL stream, i.e. Informatica.

Being relatively new to Informatica (with only a couple of years and a few projects here and there), we had many arguments on which is better and why. As I stated in my first post about Informatica, I didn’t have a good feel for it when starting out: there were too many IDEs to grapple with, the transformations don’t look intuitive, simple tasks seem very hard to perform, and so on. One of the arguments he made is that Informatica is a market leader and is a lot more powerful owing to its code re-usability, out-of-the-box connectivity across varied sources, error handling etc., and that SSIS is still doing a lot of catching up.

I decided to dig deep and validate my own findings on out-of-the-box connectivity and error handling. This is one such post, with more to come.

Task –

Load a million records from a source table to a destination table. Out of the million records, about 10 are error records. The source table stays constant. There will be two runs – the first with a table in an Oracle database as the destination, the second with a table in a SQL Server database as the destination.

I will use both SSIS and Informatica to accomplish this task and see how each of them fares.

Setting up of Source –

I have Oracle 11g Express set up on the machine. My source table is a simple table consisting of Id and Name columns with about a million records in it; Id is basically an auto-incrementing value. After it’s populated, I run a script to update the Name of 10 random records to NULL. The SQL code below performs this –


--Source table to hold the million records
create table SrcEmployeeData
(
 Id int
 ,Name varchar(50)
);

--Insert million records in it. The value level auto-increments
insert into SrcEmployeeData
select level, 'test' from dual connect by level <= 1000000;

--Update 10 random records with Name as NULL
update SrcEmployeeData
set Name = null
where Id in
 (
 select Id
 from (
 select *
 from SrcEmployeeData
 order by DBMS_RANDOM.VALUE
 )
 where ROWNUM <= 10
 );

--Commit the data
commit;

Once done, confirm that there are 1 million records, with 10 of them having NULL in the Name column.
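
A quick way to confirm that (a small sketch, run on the Oracle side):

--expect total_rows = 1000000 and null_names = 10
select count(*) as total_rows
 ,count(case when Name is null then 1 end) as null_names
from SrcEmployeeData;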

Task 1 – Tool – Informatica, Destination – Oracle
Here is the definition of the destination table in Oracle –

create table HR.EmployeeData(
    Id int
   ,Name varchar(50) not null
);

I created a mapping called m_Load_EmployeeData_Oracle which contains just two  transformations – Source and Destination as shown below –

1_1M_OracleDestinationLoad_Mapping

I then developed a workflow for it, wf_m_Load_EmployeeData_Oracle, set the Connections in the ‘Config’ section of the session as appropriate and started the workflow. Here is the result –

2_1M_OracleDestinationLoad_Error.png

So right off the bat, without any tweaking, the workflow executed successfully and correctly captured the 10 error records, with an execution time of 1m 33s.

Task 2 – Tool – Informatica, Destination – SQL Server

I created the destination table in SQL Server with the same definition as in Task 1. I then created a new mapping, m_Load_EmployeeData_SQL, with the same layout, i.e. source connected to destination, as shown below –

3_1M_SQLDestinationLoad_Mapping

I created the workflow wf_Load_EmployeeData_SQL, set the appropriate ‘Connections’ in the session properties and ran the workflow. Here is the result –

3_1M_SQLDestinationLoad_Error

The 10 error records were captured without any ado and in the same time – absolutely no difference.

Task 3 – Tool – SSIS, Destination – SQL Server

The source remains the same. Here comes the real kicker, even before we get into the comparison: if we set up the Data Flow Task (DFT) with simple ‘OLE DB Source’ and ‘OLE DB Destination’ components and start the package, it is bound to fail, as shown below –

5_1M_SQLDestinationLoad_Package_Error.png

Out-of-the-box error capturing just doesn’t exist. Here is what I have done to capture the error records, using the cascading three-destination technique. It works like this: create three copies of your ‘OLE DB Destination’. Connect your source to the first destination and set its ‘Rows per batch’ and ‘Maximum insert commit size’ properties to 50000, as shown below –

7_1M_SQLDestinationLoad_Package_OLEDST_Setting

Under mappings, the columns get auto-mapped by name. Connect the ‘Error Output’ of the first destination to the second copy and set the same two values there to 10000. Connect the second copy’s error redirection to the third copy and set its two values to 1. Rename the components appropriately.

Now drag in one more OLE DB Destination and plug the error output of the third copy into it. This one points to a new table created for capturing the errors, say ‘dbo.EmployeeData_Error’. The table definition is listed below –


CREATE TABLE [dbo].[EmployeeData_Error](
[ID] [numeric](38, 0) NULL,
[NAME] [nvarchar](50) NULL,
[ErrorCode] [int] NULL,
[ErrorColumn] [int] NULL,
[DateInserted] [datetime] NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[EmployeeData_Error] ADD CONSTRAINT [DF_EmployeeData_Error_DateInserted] DEFAULT (getdate()) FOR [DateInserted]
GO

At the mapping level, make sure to connect the ErrorCode and ErrorColumn columns coming from the third copy (the one with both values set to 1) to the corresponding columns of the EmployeeData_Error table, as shown below –

10_1M_SQLDestinationLoad_Package_OLEDST_ErrorCapture.png

With this set up, the package executes with no problem whatsoever, as shown below –

11_1M_SQLDestinationLoad_Package_Execution

Looks hale and hearty. How about the speed of execution –

11_1M_SQLDestinationLoad_Package_Execution_Speed.png

It completed in just 14 seconds. The fastest so far in the tasks we performed. Now let’s go to the final task.

Task 4 – Tool – SSIS, Destination – Oracle

Before we get into the package set-up, make sure the Oracle Provider for OLE DB is present; if not, get the latest stable version from the following link. The package set-up is very simple this time around – in total there are three components: an OLE DB Source for fetching the data and two OLE DB Destinations (one for the target table and the other for capturing the errors).

The OLE DB Destination for the target data needs its ‘Data access mode’ set to ‘Table or view’ – you cannot use the ‘fast load’ option here. The image below shows the required set-up –

12_1M_OracleDestinationLoad_Package_OLEDTSSetting.png

With that set, here is the snapshot of the package run –

16_1M_OracleDestination_Package_Execution

And the execution speed –

16_1M_OracleDestination_Package_ExecutionResults

Almost an hour for just a million records – one whole hour for a dataset containing only two columns. I had a job wherein I was transferring about 2 million records from a much larger table, with a similar requirement to capture the error records; that package took almost 3 hours to complete. With anything other than SQL Server as the destination, SSIS performs horribly.

Conclusion-

SSIS offers very poor performance for data loads whose destination is anything other than SQL Server, and if you want more robustness you need to look at third-party tools. Informatica, on the other hand, just doesn’t care what the source and destination are – it works as expected, giving uniform performance.