Alteryx – Max Date from multiple Excel files with varying schemas

As the title states, the requirement is simple – extract the max date from multiple Excel files with varying schemas. I had a task where I needed to go through about 30-odd Excel files and pick the max date from each of them for monitoring purposes. The files do not necessarily have the same schema, i.e. the same number of columns, or the same sheet name for that matter.

Just looking at the requirement, I knew this would be a cakewalk for Alteryx. You see, once you get a small taste of how a macro works, you begin to realize the incredible power of being able to customize every single component of the workflow – input tool, output tool, filter, join. Anything, you just name it, it will do it for you. It puts those tiny black thingies on the controls and hallelujah! You are set.

Alteryx Designer x64 - Macros_configuration.png

I did some quick mental math and bam, I knew I could rig this up effortlessly.

Without any further ado, let’s get into this with an example. Like every other person out there, I consume a lot, I mean a LOT, of audio/video content, be it movies, series or podcasts. So I now have three files to keep track of these –

1. Movies Seen.xlsx containing the following data –

Alteryx Designer x64 - Blog_Excel_Movies_Sample

2. Series Seen.xlsx containing the following data –

Alteryx Designer x64 - Blog_Excel_Series_Sample

3. Podcasts Heard.xlsx containing the following data –

Alteryx Designer x64 - Blog_Excel_Podcasts_Sample

As can be seen, the files above have different column names and different sheet names too. My goal is to find when I last watched a movie or a series, or heard a podcast.

Here is the expected result –

Solution –

My first step was to create a lookup file – media_config_data.csv – which looks like this –

Alteryx Designer x64 - Blog_Config_Data

We will make use of this later. First, let’s create one simple workflow that gives the max date from one of the files above, i.e. Movies Seen.xlsx. Here is a screenshot of the base workflow –

The workflow does the following –
1. Use an ‘Input Data’ tool and connect to the Excel file.
2. Add a ‘Formula’ tool and create a new column ‘Max Date’, set to the ‘Last Watched On’ field.
3. Add a ‘Select’ transform and deselect all the columns except ‘Max Date’ and ‘File Name’.
4. Add a ‘Summarize’ transform, taking the Max of ‘Max Date’ as ‘Last Max Date’.
5. Generate a dummy output.
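
If it helps to see the logic outside Alteryx, here is a minimal Python sketch (pandas is my choice here, not part of the workflow) of what this base workflow computes; the sheet name ‘Movies’ and the column ‘Last Watched On’ come from the example above –

import pandas as pd

# Read the 'Movies' sheet of the workbook
df = pd.read_excel("Movies Seen.xlsx", sheet_name="Movies")

# The Formula + Summarize steps boil down to taking the max of the date column
max_date = df["Last Watched On"].max()
print("Movies Seen.xlsx", max_date)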

Ensure that the workflow runs without any errors. The next step is to create a ‘macro’ out of it. Some of the ‘comments’ in the above screenshot should give an idea of what is going to happen.

Step 1 – Macro to obtain Max of Date column

Perform the following steps to create the macro. This is where the actual magic happens –
1. Drag three ‘Control Parameters’ onto the canvas and set their labels as – Excel Name, Sheet Name, Date Column Name.
2. Connect the ‘Excel Name’ control parameter’s magnifying glass to the lightning bolt of the ‘Input Tool’. An ‘Update Value’ action tool will pop up.
3. Select the ‘Update Value’ tool and perform the actions shown below –

4. Repeat steps (2) and (3) for the ‘Sheet Name’ control parameter, this time using ‘Replace a specific string:’ with the value ‘Movies’.
5. Now connect the ‘Date Column Name’ control parameter to the lightning bolt of the ‘Formula’ tool and perform the actions seen below –

6. Lastly, remove the ‘Output’ tool and connect a ‘Macro Output’.
7. Save the file as ‘Obtain Max Date from Excel File’.

Here is how the macro would look once done –

Step 2 – Workflow to process the files
This workflow is where we use the Directory tool to obtain all the Excel files, get the file name and full file path, join that with the config file, and obtain the three variables the above macro needs –
1. Excel Name
2. Sheet Name
3. Date Column Name
Here is how the workflow looks –

On running the workflow, we get the desired output as shown below –
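
For readers who prefer to see the end-to-end logic in code, here is a rough Python sketch of what Step 1 and Step 2 do together. I am assuming media_config_data.csv carries three columns along the lines of FileName, SheetName and DateColumnName; the actual column names are in the config screenshot above –

import pandas as pd

# The config maps each Excel file to its sheet and its date column
config = pd.read_csv("media_config_data.csv")

results = []
for row in config.itertuples():
    # One macro run per file: open the right sheet, take the max of the right column
    df = pd.read_excel(row.FileName, sheet_name=row.SheetName)
    results.append({"File Name": row.FileName,
                    "Last Max Date": df[row.DateColumnName].max()})

print(pd.DataFrame(results))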

So who do you really work for?

I attended a Town Hall meeting yesterday at Genpact, with whom I currently contract. It was a very interesting meeting, with about 70-odd people attending. One of the things that pleasantly surprised me was the strength of the non-Indian representation. Genpact is one of the mid-tier IT firms based out of India, and having worked for such consultancy firms before, I expected a large Indian diaspora with the occasional Aussie.

That was very much the case at the first company I worked for here in Australia, contracting for LINK Group as a MindTree employee. At the annual parties we had back then, I could count the Aussies in the whole pack on my fingers. So it was quite a sight to see a change.

The one difference I can see right away is that whereas MindTree had a predominant presence of developers, here I could see the opposite. To me, this only means the company’s expansion is yet to happen. There were two main speakers who gave an update on the company’s outlook, how it fared over the last year, and the exciting new clients they bagged this year. The company looks to be heading somewhere strong, with a really good performance outlook.

Right at the end, the meeting veered into the employee feedback they had obtained, what it meant to them, how they are going to address it, etc. It was at this point that I kind of zoned out.

It reminded me of all the times I have been a salaried full-time employee, for whom talks like this used to invigorate a sense of belonging. Ever since I started contracting, events like this have put things in a different perspective.

After the meeting, I stayed on for a while just to have a casual chat with the folks there. It was then that I got asked – so who do you really work for?

That’s when, for the first time, I got a sense of liberation. Being on contract is, in a way, being on your own. In the truest sense, the answer would be that I work for the client, as that is my primary responsibility. The question, though, is much broader than that. In the absence of allegiance, for whom am I really working?

Ever since the day I started working, my first and foremost dedication has been to the quality of the work I deliver. It has to be flawless, easily scalable, extensible and, most importantly, well-documented. It is the work that gives me the utmost satisfaction. The one thing I have consciously decided to focus on now is improving my technical knowledge and making inroads into the big data engineering space.

Day #2 – Data Modeling

Day #2 of my course involved getting an overview of data modelling. It started off with basic introductions to relational and cloud databases. The course per se touched only on basic terms and was a bit underwhelming w.r.t. the intro to the PostgreSQL database.

Cassandra database is the next hurdle to cross and work on.

Day #1 – Getting the engine started

Yesterday marked my first day in the Data Engineer Nanodegree course offered by Udacity. After thinking a lot about how to best equip myself and enrich my knowledge in the world of Big Data, and taking steps towards it, this course came along covering THE essential things I wanted to learn – cloud data warehouses, Spark and data lakes.

What further sealed the deal was that I am working on a project where we use Spark and a Data Lake as well. However, that part is handled by a separate team. My involvement so far has been limited to writing Impala queries, creating data structures, testing the Sqoop queries, and occasionally tuning queries by looking at the logs to understand which partitioning is better. I reasoned that doing this course would give me better ammo to pitch myself into the Data Lake team. Time will tell (fingers crossed).

I have been longing for an opportunity to pivot my career from traditional BI to Data Engineering on Big Data platforms. Here is a course that not only promises to teach the nitty-gritty of being a Data Engineer through proper, structured, methodical teaching, but also helps with shaping my career via services like resume editing and LinkedIn page setup. Long way to go for that.

So here is what my Day 1 (yesterday) felt like – absolutely wonderful!
In the first few videos I really got to know what a Data Engineer actually is, what the other titles actually mean, and how they stack up.

What resonated with me a lot was this article, which was one of the materials to read up. It spoke volumes to me, as this is exactly the path I have followed throughout my career. I started off writing ETL packages via SSIS on traditional OLTP/OLAP databases, designing cubes off of them, and designing and developing reports based on them.

All of that stopped about three years ago, and it was only a year ago that I got completely off it. I now work on data sources that are disparate in nature or built on the Data Lake. This is a brand new world for me, and I am loving every part of it. The challenges are different, more exciting, and there is SO much more to be done.

Looking at how data has proliferated and how traditional RDBMS technologies are no longer sufficient to cater to the growing needs of business, I am happy to see the organic growth in me. Of course, the forces that shaped me to be where I am today are largely the work done in BI, but stepping into the new future I need more ammo.

Coming back to the course, I started off with data modelling basics and some intro to PostgreSQL.

The next post will be more structured. The purpose of these posts with the # tag is to motivate myself to read every day and share my thoughts on my learning.

Alteryx App – Adding (Select All) to Dropdown

One of the most common requirements when creating Alteryx Apps involving a dropdown is to have (Select All) as one of the values. This value, if you have not inferred by now, would not be part of the data source but something we add to it. Basically, I am trying to simulate what Excel does when ‘Sort & Filter’ is enabled, as seen below –

2019-02-07 11_39_50-Marvel Movies Box Office Report.xlsx - Excel

In this post I am going to demonstrate how to add this value to the dropdown and how it then needs to be consumed when filtering the data source. This post has the following sections –

  1. Get Dummy Data
  2. Add (Select All) to Dropdown tool
  3. Filter dataset using the ‘(Select All)’

1. Get Dummy Data

The test data I am going to use for this post is the highest-grossing Marvel movies, which I am getting from this link – Marvel Comics Movies at Box Office. I copied the first table and stored it on my local disk. In my Alteryx App, I brought this data in with an Input Tool, then used the DateTime Tool and a Filter to create a proper date format.

Sample Data is shown below –

2. Add (Select All) to Dropdown tool

As the next step, I am going to create one dropdown filter – ‘Studio’. For this post I am choosing the option ‘Manually set values (Name:Value – one per line)’ and have entered the values manually. As you can see, I have kept the option ‘(Select All)’ as the first entry.

2019-02-07 11_55_37-Alteryx Designer x64 - Test Workflow - Blog.yxwz_.png

If the data is coming via a connected tool, ensure that the source data for the filter is joined with the ‘(Select All)’ manual text data. Likewise, if it’s coming from an external source, add this entry there. Essentially, because this value doesn’t exist in the data, we need to add it.

3. Filter dataset using the ‘(Select All)’

We have identified the column on which we need to filter. We have created the values for the filter. It is now time to actually ‘filter’. I have now dragged in a Filter transform. Too repetitive, isn’t it? I will just ‘filter’ (pardon the pun) going forward. Put the following entry in the ‘Custom Filter’ option as shown below –

2019-02-07 12_05_27-Alteryx Designer x64 - Test Workflow - Blog.yxwz_.png

Here is what we need to do next –
1. Connect the ‘search’ icon of the ‘Studio’ dropdown to the lightning bolt of the ‘Filter on Studio’ operator. An ‘Update Value’ operator pops up in between.
2. Click on the operator. Under the ‘Value or Attribute to Update:’ section, click on “Expression – value = ….”
3. Tick the checkbox ‘Replace a specific string:’ and keep the value as ‘<studio>’ (without quotes), as can be seen below –

2019-02-07 12_14_02-Alteryx Designer x64 - Test Workflow - Blog.yxwz_.png

That’s it. Essentially, the logic here is the inclusion of an OR expression comparing the placeholder value with ‘(Select All)’. When a specific filter value is selected, the (Select All) comparison is false; when the ‘(Select All)’ option is selected, it is true, and the filter lets all the data flow through.
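
Reduced to a couple of lines of Python (the function and the studio names are mine, purely for illustration), the logic is –

def passes_filter(studio, selected):
    # '(Select All)' makes the second comparison true for every row
    return studio == selected or selected == "(Select All)"

print(passes_filter("Buena Vista", "(Select All)"))   # True - everything flows through
print(passes_filter("Buena Vista", "Fox"))            # False - a specific pick filters rows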

The same logic can be used to filter data, say, in a Dynamic Input with data coming from a particular SQL data source. The code for that would look like this –

select *
from really_awesome_table
where (
        that_parameter = '<this_value>'
        or
        '(Select All)' = '<this_value>'
      )

To test it out, I have put in a Summarize transform to group by ‘Studio’ and get ‘Total Movies’. Let’s run the workflow. Here is how the wizard looks –

2019-02-07 14_20_04-Alteryx Designer x64 - Select All - Filter implementation.yxwz.png
Here is how the output looks, once executed with (Select All) and once with an individual filter –



IntelliJ – New Project – Scala not appearing

Have you come across the problem of not being able to select ‘Scala’ in the ‘New Project’ dialog in IntelliJ, as shown below? (psst – Scala is present in there, I know. For imagination’s sake, let’s say you don’t see it, capisce?)

IntelliJ-New Project-Window.png

The first thing Google points you to, for everything, is installing the ‘Scala plugin’.

Let’s say you do have the plugin installed, available and enabled as shown below, but even then you don’t find it in ‘New Project’ –
IntelliJ-Plugins-Scala-Enabled.png
This usually happens when you have upgraded IntelliJ, or re-installed a different version at a different time, and have imported the old settings. The best way to get past this error is to navigate to the folder C:\Users\<username>\ and delete all the folders starting with .IdeaICxxxx, as highlighted below –
IntelliJ-User Folder-Delete.png

That’s it. Now open up IntelliJ again; all should be good to get going. Of course, this time around you will need to set up your JDK and Scala libraries again. Follow this blog – link

 

Alteryx – PostgreSQL – Out of memory while reading tuples

While working on one of my Alteryx workflows with PostgreSQL data sources, I started getting this weird error –

Error SQLExecute: Out of memory while reading tuples.; memory allocation error???

This was an unusual error. For one, I was running it not on my local machine but on the Gallery, which is governed by the server. Also, I feel the Alteryx folks could have done away with the three question marks (???). It feels as if we are being questioned on the memory allocation and it’s just waiting for some confirmation.

This is one of those classic it-works-in-dev-not-in-prod kinds of problems. I went back and compared the two ODBC data source settings across both environments, and that is where I could see the problem: the PostgreSQL ODBC data source needs the ‘Use Declare/Fetch’ option set. With that option on, the driver declares a cursor and fetches rows in batches instead of pulling the entire result set into memory.

Open the ‘ODBC Data Source Administrator’ window after running it as administrator. Click on the data source you are using and click ‘Configure’. In the ‘PostgreSQL ANSI ODBC Driver (psqlODBC) Setup’ window, click ‘Datasource’. Ensure that the ‘Use Declare/Fetch’ checkbox is checked, then click ‘OK’ and close the subsequent windows. This setting resolves the issue. Below is the screenshot –

PostgreSQL data source setting

 

Running dynamic Powershell script using Alteryx

Figuring out how to run PowerShell scripts using Alteryx was a very harrowing experience for me. I had used the ‘Run Command’ tool before, but I remember even then it was quite a struggle to get it up and running.

The help files from Alteryx with regard to this tool are abysmal. They give an explanation of each field within the tool but don’t really tell you how to go about it step by step. The nearest help I could get from the Alteryx forum is this. Here is my attempt to make it easier for folks who find themselves in a similar situation.

Task Description
I need to create a sub-folder under multiple folders. The sub-folder name is determined dynamically at run-time from the name of the input file to the workflow.

For example, say my input file is movies_released_20180801.csv, and I have a set of folders under the parent directory ‘Movies’, as shown below –
Powershell_Input_Folder_Structure

I need to create a folder called ‘20180801’ under each of the sub-folders, if it doesn’t already exist. As each month passes, new sub-folders will need to be created.

Solution
The first step of my solution is to create a PowerShell script that performs the sub-folder creation. Here is my script –

#Get the parent directory
$root = "C:\Users\kvankada\Documents\Personal\Learning\output\Movies"

#Get all directories present in the parent folder
$folders = Get-ChildItem -Path $root -Directory

#Iterate through each directory in the parent folder
foreach ($folder in $folders)
{
    #Build the full path of the sub-folder to create
    $Subfolder = $folder.FullName + "\20180801"

    #Check to see if the sub-folder already exists
    if (!(Test-Path -Path $Subfolder))
    {
        #If it does not exist, create the new sub-folder
        New-Item -ItemType directory -Path $Subfolder
    }
}

The root folder in the above script comes from the Alteryx workflow. The next steps may start looking crazy, but bear with me – it will all make sense. Drag in a ‘Text Input’ tool with the following two columns –

Path – C:\Users\kvankada\Documents\Personal\Learning\output\create_subfolders.ps1
Input – C:\Users\kvankada\Documents\Personal\Learning\input\movies_released_20180801.csv

The Path should be a valid path and accessible to the user; it can be a UNC path as well. Drag in a ‘Formula’ tool and create two new columns, Subfolder and Powershell_Script, with the values shown below –

Powershell_Formula_Task_powershell
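
Since the actual expressions live in the screenshot above, here is roughly what the two new columns compute, sketched in Python (the regex is my approximation of pulling the date token out of the input file name) –

import re

input_file = "movies_released_20180801.csv"

# Subfolder: the yyyymmdd token at the end of the input file name
subfolder = re.search(r"_(\d{8})\.csv$", input_file).group(1)   # -> '20180801'

# Powershell_Script: the script shown earlier, with the hard-coded
# '20180801' swapped for the derived Subfolder value
template = open("create_subfolders.ps1").read()
powershell_script = template.replace("20180801", subfolder)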

As can be seen, the new column Powershell_Script is nothing but the entire PowerShell script, with ‘Subfolder’ being substituted in dynamically. Drag in a ‘Select’ tool and deselect all the fields except ‘Path’ and ‘Powershell_Script’. Drag in a ‘Run Command’ tool. Click on ‘Output’ under ‘Write Source [Optional]’ in the configuration window as shown below –
Powershell_Run_Command_Task_Write_Source.png
In the ‘Write to File or Database’ field, click on the drop-down and select ‘File’. A ‘Save a data file’ window pops up. Here, choose any location and save the file as, say, ‘temp_powershell.csv’. I have chosen a ‘temp’ folder as my file location. In the Alteryx Designer, the following changes still need to be made against these settings –
Delimiters – \0
First Row Contains Field Names – Unchecked
Quote Output Fields – Never
Take File/Table Name From Field – Tick the checkbox and choose the option ‘Change Entire File Path’
Field Containing File Name or Part of File Name – Path
Keep Field in Output – Unchecked

The final dialog will look as shown below, with the changes highlighted. Click ‘OK’ once done –
Powershell_Run_Command_Task_Write_Source_2
The reason we need to do this is that Alteryx doesn’t provide a way to save output directly to a PowerShell script file. We first get the content of the PowerShell script saved as a CSV, and then the whole file is renamed via the ‘Path’ value we provided in the workflow. It’s a roundabout way of doing things.
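
In effect, all of those settings amount to the equivalent of this little Python sketch – take the script text from the Powershell_Script field and dump it, raw, to the location in the Path field –

# No header row, no quoting, '\0' delimiter: just the raw script text
path = r"C:\Users\kvankada\Documents\Personal\Learning\output\create_subfolders.ps1"
powershell_script = "..."   # stand-in for the Powershell_Script field content

with open(path, "w") as f:
    f.write(powershell_script)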

Continue with the ‘Run Command’ tool configuration and set it as shown below. Pay particular attention to the double quotes in the ‘Command Arguments [Optional]’ section; they are absolutely necessary –
Powershell_Run_Command_Task_Final
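
For reference, what the tool ends up executing is conceptually the following, sketched in Python (the -ExecutionPolicy flag is my usual addition, not necessarily what the screenshot shows) –

import subprocess

# PowerShell invoked with the freshly written .ps1 as its argument;
# quoting matters in case the path contains spaces
subprocess.run([
    "powershell.exe", "-ExecutionPolicy", "Bypass",
    "-File", r"C:\Users\kvankada\Documents\Personal\Learning\output\create_subfolders.ps1",
], check=True)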

That’s it. Now you are set to run. There is no need to configure the output of the ‘Run Command’ tool. If you do need to use it, the ‘Read Results’ path needs to be configured; I might do a follow-up post on that later. For now, you can ignore the additional output configuration. Here is how the final workflow looks –

Powershell_Workflow_Overview_Final.png
I ran the workflow and here is my output –
Powershell_Final_Output
For the final workflow, set the ‘Run Minimized’ and ‘Run Silent’ checkboxes in the ‘Run Command’ tool for a smoother run. While you are testing things out, though, it is important for debugging to leave them unchecked – that way you will see any errors popping up.

That’s it. That’s how it is done in Alteryx. It took me quite a lot of tinkering and experimenting to get the desired result. I hope this post helps in solving your own problems getting PowerShell scripts to run via Alteryx.

 

Alteryx InboundNamedPipe::ReadFile: Not enough bytes read error – Postgres

Yet another strange day. I kept getting the following error –

Error – The Designer x64 reported: InboundNamedPipe::ReadFile: Not enough bytes read. The pipe has been ended¶

I tried several things – clearing temp files, restarting Alteryx, restarting my system multiple times, logging off and on, etc. – to no avail. It just kept failing.

What added to the confusion is that it was happening only for a few tables, whose volumes were much smaller. I mean, Table 1, with about ~20M records from the same database, extracted successfully, whereas Table 2, with only ~3.5M, ran into this problem. It seemed really strange. The good old internet hadn’t turned up anything useful, and neither had the Alteryx Forums. All the info I got was that at one stage of the workflow, it was running into an error.

In my case, though, apart from the ‘Input Tool’ pulling in the data and the ‘Output Tool’, there was nothing in between in the workflow. Still it was failing.

Anyway, I took a break, thought for a while and said to myself – “When was the last time you had such a strange error with the Alteryx and Postgres combo?” Oh yeah, right here.

Hmm, why not try the same fix? I sure did, and you know what – it just god darn works! Don’t even ask me how or why; it works. So, gents and ladies, here is the fix you gotta do –

Go to the ‘Pre SQL Statement’ of your Alteryx ‘Input Tool’ and insert the following –

set client_encoding to 'UTF-8'

As simple as that! Thank me later.

Alteryx – Extract data from various excel files with various tab names

It was one of those challenging tasks that had to be done, and I felt it was worth blogging about as well.

Task –
Extract data from a limited range of a sheet across various Excel files. Each file has multiple tabs, and the tab names are not consistent across files.

For instance, let’s say I have three Excel files with the name format MonthlyReport_<<Month>>.xlsx, where Month is Dec, Jan or Feb.
The data within my ‘Dec’ file is as follows –
1_srcExcel_Dec

Nice, well-formatted data. Let’s look at Jan’s data –
1_srcExcel_Jan

Uh oh! Somebody left a comment in column E. Let’s now look at Feb’s data –
1_srcExcel_Feb
Aargh... another person left a comment, this time in column D.
Now my task is to get all the data from columns A and B only, across all the Excel files with their differently named tabs.

Solution –
The first step is to get all the sheet names from each of the Excel files. To achieve this we need to use a macro; the steps are listed below –

  1. Open a new workflow. Drag in an Input tool, connect it to one of the Excel files, choose the option <List of Sheet Names>, and set the option ‘Output File Name as Field’ to ‘Full Path’, as shown below. Setting these two options results in two fields in the output – FileName and Sheet Names –2_ListExcelSheet_InputData
  2. Add a ‘Formula’ tool. Set ‘FileName’ to the query that’s needed, as shown below. The purpose of the query is to replace ‘<List of Sheet Names>’ with the actual sheet name plus the range of data to be fetched. If we don’t specify the range and just stick to the sheet name, the workflow will fail because the data is non-uniform. This is important to take note of. I am pasting the query for easy reference; keep the backtick characters in mind –
    Replace([FileName], "<List of Sheet Names>", "Select * from `" + [Sheet Names] + "$A1:B100000`")
    3_ListExcelSheet_ForumlaTool
  3. Drag in a ‘Macro Output’ and connect it to the ‘Formula’ tool.
  4. Drag a ‘Control Parameter’ onto the workflow. Connect the ‘search’ icon of the Control Parameter to the ‘lightning’ icon of the ‘Input Tool’. Doing so, you get an ‘Update Value’ transform in between. Just leave it as is; by default it selects ‘File – value’, as shown below. The purpose of dragging in the Control Parameter is to create a placeholder through which multiple Excel files can be passed. If you are coming from an SSIS background, this is the equivalent of a ‘ForEach’ file loop where the full file path is passed as a parameter.
    4_ListExcelSheet_UpdateValue
  5. Save the workflow as ‘List of sheets Macro.yxmc’. The full workflow will look as shown below. (Note – I have added ‘Annotation’ for the Control Parameter, Input Tool and Formula Tool.) –5_ListExcelSheets_Completed.png

Now we have a macro which gives us the full path of each Excel file along with the query we need to get the data. We now need another macro into which we can put the query and get the data. Remember, any time we need looping, a macro is the way to go. The first step was just getting the sheet names from each Excel file and modifying the file path with the query.

The second macro is fairly simple –

  1. Open a new workflow. Drag in an ‘Input Tool’ and connect it to one of the source Excel files. In my case I connected it to the ‘Feb’ file. When you connect to the file, here is how the ‘Choose Table or Specify Query’ window looks –
    6_ObtainDataFromExcel_ChooseTable.png
  2. Do not choose any sheet name; rather, click on ‘SQL Editor’, paste the following query and click ‘OK’ – SELECT * FROM `February$A1:B100000`
    as shown below –7_ObtainDataFromExcel_SQLEditor
  3. Drag in a ‘Filter’ transform and connect it to the ‘Input Tool’. In the properties, select the column ‘Id’ and set the drop-down to ‘Is Not Null’. Basically, if you look at the query in the step above, I have given an arbitrary end row of B100000. The data may or may not extend that far, so we need to filter out the empty rows.
  4. Drag in a ‘Macro Output’ and connect it to the ‘Filter’ transform above.
  5. Drag in a ‘Control Parameter’ and connect its ‘search’ icon to the ‘lightning’ icon of the ‘Input Data’ transform. An ‘Update Value’ transform appears in between, and you don’t need to do anything; by default it will update the ‘File – value’, as shown below – 8_ObtainDataFromExcel_InputData.png
  6. Save the workflow with a name like ‘Obtain limited range from excel.yxmc’. Here is how it looks – 9_ObtainDataFromExcel_Overall

Okay, so we now have two macros – one for getting the sheet names and modifying the full file path with the requisite query, the other for getting the data from the Excel file. We now need another workflow to call these two macros and do our job.

  1. Open a new workflow. Drag in the Directory tool and point the location to the path where the source files reside. Set the file specification to ‘*.xlsx’ so that we only get the Excel files.
  2. Drag in a ‘Formula’ transform and connect it to the Directory tool. Add a new column, give it a name like ‘FullPath_SheetNames’, and give it the following value – [FullPath]+"|<List of Sheet Names>".
  3. Right-click on the blank space in the workflow and go to ‘Insert’ -> ‘Macro’ -> ‘List of Sheets’, as shown below – 10_ObtainDataFromExcel_InsertFirstMacro.png
  4. Connect the macro to the ‘Formula’ transform. In the ‘Properties’ box of the macro, set ‘Choose list of sheets input field’ to ‘FullPath_SheetNames’. (Note – in case you have not done any annotations in the macro, it will appear as ‘Control Parameter Input’.)
  5. Drag in a ‘Filter’ transform and connect it to the above macro. Set the filter as [Sheet Names] != "Summary". I am doing this because each of our files has two sheets, and the sheet we are interested in is not the ‘Summary’ one. I could have kept just one sheet in all the files, but I wanted to show how, if need be, you can utilize the other sheets separately for different purposes.
  6. Right-click on the blank space in the workflow, go to ‘Insert’ -> ‘Macro’ -> ‘Obtain Limited range from Excel’, and connect the macro to the ‘Filter’ transform.
  7. In the properties of the second macro, set the value to ‘FileName’.
  8. Drag in a ‘Browse’ transform to look at the output. (This is where you would typically plug in your Output Data tool in a real-world scenario.)

11_ExtractData_Overview

After running the workflow, here is the output from Browse –
12_ExtractData_Output
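
If you prefer to see the whole thing condensed into code, here is a hedged Python sketch of the logic the two macros and the driving workflow implement together (pandas assumed; reading only columns A:B mirrors the A1:B100000 range) –

import glob
import pandas as pd

frames = []
for path in glob.glob("MonthlyReport_*.xlsx"):
    # Macro 1: list the sheet names in each workbook
    for sheet in pd.ExcelFile(path).sheet_names:
        if sheet == "Summary":                      # the Filter in the driving workflow
            continue
        # Macro 2: read only columns A:B, then keep rows where 'Id' Is Not Null
        df = pd.read_excel(path, sheet_name=sheet, usecols="A:B")
        frames.append(df.dropna(subset=["Id"]))

print(pd.concat(frames, ignore_index=True))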

That’s it. For my task I had to go through about 12 different Excel files with various tab names, and the out-of-the-box tools Alteryx provides are so very powerful for getting the job done.

Just falling in love with it.