Visual Studio – “Build must be stopped before the solution can be closed” error

Time and again I have been getting this nasty error for no obvious reason as seen below –

BuildMustBeStopped

The first thing I tried was to stop the build by going to Build -> ‘Cancel Build’. That just doesn’t work. So here is the script I run to close my session –

$p = Get-Process devenv       # grab the Visual Studio process object(s)
Stop-Process $p -WhatIf       # dry run – shows what would be stopped without doing it
Stop-Process $p

I first capture the process object in a variable, use -WhatIf to check that what I am about to kill is what I intend, and then proceed to stop it. That’s all there is to it. Note that Get-Process devenv returns every running devenv instance, so any other open copies of Visual Studio will be stopped too. You now need to open a new instance of Visual Studio and get cracking on whatever it is that you are working on.

SSRS Report Migration 2012+ – Method 1

As Microsoft goes on a rampage shortening their release cycle and shipping not one but three versions in the last six years (2012, 2014, 2016!), it is time for us to have a look at the various methods available to migrate reports from one ReportServer to another. The reason I bring this up is that for all my working career, the only tool I had used to migrate reports is RSScripter.exe. The original link is archived and is available from the following link – RS Scripter. It works well up to SQL Server 2008 R2. What about the versions after that?

That’s the question I want to answer through this post. In my quest to find out how to build a package ready for deployment and to further automate it, one of the first tools I got my hands on is this – SSRS Powershell Deploy.

It’s a bunch of PowerShell scripts that can be used to deploy reports. The biggest downside to this method, though, is that you can’t pick and choose which reports to deploy – you either deploy all or none at all. If the solution can be analyzed well enough, we may come up with a workaround for that. For now, let’s dive into how to set the solution up and the steps involved in deploying –

Once you open the link, click on the ‘SSRS-1.3.0.zip’ file and follow the steps below –

1. Download the .zip from https://github.com/timabell/ssrs-powershell-deploy/releases/latest
2. Right-click the zip file in Windows Explorer, click “Properties”, and then click “Unblock”.
3. Create the folder ‘Documents\WindowsPowerShell\Modules\’
4. Open up the zip file, copy the SSRS folder, and paste it into ‘Documents\WindowsPowerShell\Modules\’ (or somewhere else on your $env:PSModulePath).

You can test whether the module is available to PowerShell by running a simple command as shown below –

1_SSRSPowershellImport.png

As the names imply, use Publish-SSRSProject if you want to deploy an SSRS project (i.e. a .rptproj file), and Publish-SSRSSolution if you want to deploy an SSRS solution (i.e. a .sln file).

Using this, let’s do a sample report deployment. I will first show the simplest way to deploy, i.e. by using pre-filled configuration data. I have a project OpenDataReports containing one report, ‘Top 10 Products.rdl’.

Once the solution is opened, go to Project -> <<ProjectName>> Properties. The properties page opens up.
Perform the following changes as shown below –
Configuration – set it to Release
TargetServerURL – http://<<MachineName>>/ReportServer<<_InstanceName>>
In addition, the other settings can be set as well, i.e. TargetDatasetFolder, TargetDataSourceFolder, etc.

2_SSRSProjectProperties.png

With the properties set, right-click on the project and click on ‘Rebuild’ as shown below. Ensure it succeeds –

3_SSRSProjectRebuild.png

Go to the project path and check the \bin\Release folder to ensure all the required reports and data sources are present. In my case, here are the contents of the folder –

4_SSRSProjectbinReleaseContent.png

Finally, run the PowerShell command shown below. The command is a simple one –

Publish-SSRSProject -Path "<<FilePath>>\OpenDataReports.rptproj" -Configuration Release -Verbose

5_SSRSProjectDeploy.png

That’s the simple way to go about deploying reports. Of course, one need not do the pre-configuration at all; if you look at all the parameters provided by the script, each of the project properties can be set at run time. I will post an example of that in the next post.

Meanwhile, do leave a comment on how deployments are happening in your environments.


Bye Bye SQL Profiler – Welcome Extended Events

SQL Profiler has long been THE go-to tool for tracing queries, and if you are a SQL developer it would be a miracle if you haven’t used it at all. It comes into play in every facet of the BI stack – be it an SSIS package that is currently running, a blank SSRS report you are trying to understand, or a cube getting processed in the background – hooking up a trace is the first line of debugging.

The biggest problem with it, though, is that it ALWAYS has to be used within a limited time window. Extend it longer than intended and all the activity on the SQL Server tends to slow down, as tracing is a resource-intensive operation.

Sensing this, I believe, Microsoft first came out with Extended Events in SQL Server 2008. It was horrendous, to say the least. At that time everything had to be done through a bunch of scripts, joining multiple system views, with XQuery added on top to grab the actual data we needed. The learning curve was huge. I admit I read various tutorials and did some practice, but when I really wanted to use it I would get cold feet. Without googling at least twice it was a no-go, and I used to fall back on SQL Profiler.

From SQL Server 2012 onward, Microsoft introduced a GUI for Extended Events, making it a real breeze to work with. This has become the de facto tool I use for tracing queries now.

Here is how one can go about setting one up and using it to trace your SQL queries –

Log on to the SQL Server and go to Management -> Extended Events -> Sessions. Right-click and select ‘New Session Wizard’ as shown below –

1_Opening_NewSessionWizard.png

Click ‘Next >’ on the Introduction screen. In the ‘Set Session Properties’ tab, give the session a name, say ‘All SQL Queries’, as shown below, and click ‘Next >’ –

2_SetSessionProperties.png

In the ‘Choose Templates’ tab, click on ‘Use this event session template:’ and in the drop-down select ‘Query Batch Tracking’ as shown below, then click ‘Next >’ –

3_ChooseTemplate.png

In the ‘Select Events to Capture’ tab, under ‘Selected events:’, remove error_reported and rpc_completed so that only sql_batch_completed remains, as shown below, and click ‘Next >’. You can remove the two that come in by default by clicking on them and using the ‘<’ arrow button –

4_SelectEventsToCapture

Keep the defaults as they are on the subsequent screens and click ‘Finish’ on the ‘Summary’ page. In the ‘Create Event Session’ page that pops up, you are given two options to start seeing your results immediately. Enable both and click ‘Close’ as shown below –

5_CreateEventSession.png
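Incidentally, everything the wizard just built can also be scripted. Here is a minimal T-SQL sketch of the session above (session name and event as chosen in the wizard; I am assuming a simple ring_buffer target here – the wizard’s template may configure more) –

-- create the session with just the one event we kept
create event session [All SQL Queries] on server
add event sqlserver.sql_batch_completed
add target package0.ring_buffer;  -- in-memory target; the template may choose differently

-- start it (the equivalent of the 'start immediately' option)
alter event session [All SQL Queries] on server state = start;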

With that, the trace is up and running, waiting for queries to come in. Here is the output showing one query that was run against a database on the server –

6_QueryResult

As can be seen, this output is cleaner and easier to read, without the unnecessary redundant information that used to come along in SQL Profiler.

This is just the tip of the iceberg of what we can achieve using Extended Events. There is an ocean out there to explore, but for those who are seeking to trace a query this should be a good start.


SSIS Lookup Gotcha – Test Fully

The task I had on my hands was simple. I had to perform a lookup on a target table against my source data and get all the non-matched rows. The data I was fetching from the target for the lookup had additional string manipulation done on it to obtain the required lookup value. I wrote a query to fetch that data, tested it on a sample subset (using TOP 10) to check its correctness, and went on to implement it.

I set everything up, and when I ran the package the lookup was just not working. The rows that I expected to match were simply getting redirected to no-match. When I started debugging, it became apparent that the query I had used to retrieve the dataset in the lookup was incorrect. This time around I ran the query without the subset, and the query failed to execute, showing me the actual error.

The moral of the story is that one should not rely on the ‘Preview’ results offered by the Lookup transform, or on a subset when sampling the data. Check that the query works for the entire set.

Let me illustrate this with an example.

Below is a sample of Source Data –


FileName
Test1.txt
Test3.txt

Here is the data I am doing the lookup against, from a table, say dbo.LookupTable (a simple Id and FilePath pair of columns) –

1_TargetDataSample
The string manipulation query for this, as you might have already guessed, extracts the FileName from the FilePath.

select Id,RIGHT(FilePath, CHARINDEX('\', REVERSE(FilePath)) -1) as 'FileName'
from dbo.LookupTable;

I have a simple Data Flow Task which does the following –

  1. OLE_SRC – Get Source Data – gets the source data as shown above.
  2. LKP – Get Id – using the query above, fetches the Id and FileName. Note that the moment you put that query in and go to ‘Columns’, it will throw up an error. For the purpose of this illustration, just ignore it and do the mapping.
  3. Trash Destination – a priceless open-source transformation from Konesans and a must-have development aid.

Here is how the package looks after running it –

2_DataFlowTaskContents

As can be seen, only one row is shown as matched even though there are two matching rows.

The data in this illustration is very small, but the original data I had was about 100k records, and it was difficult to debug why this error occurred.
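For what it’s worth, one plausible culprit with this particular expression is a FilePath value with no backslash in it: CHARINDEX then returns 0, RIGHT is handed a length of -1, and the query errors out on the full set with an invalid-length error even though a TOP 10 sample sails through. A defensive sketch along these lines (falling back to the raw value) avoids that –

select Id,
       case when FilePath like '%\%'   -- only strip the path when a backslash exists
            then RIGHT(FilePath, CHARINDEX('\', REVERSE(FilePath)) - 1)
            else FilePath              -- fall back to the raw value
       end as 'FileName'
from dbo.LookupTable;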


Informatica 101 – Where are my Foreach loops? – 1

One of the constructs I use most in SSIS is the Foreach Loop, which *gasp* just isn’t there in Informatica out of the box. In this post, let’s look at how the equivalent is done.

Test Data


I have three files named StudentsData001.txt, StudentsData002.txt, StudentsData003.txt. In each of the files, there are two columns – Id and Name – such as

Id,Name
1,Karthik
2,Pratap

Our goal is to integrate the data from all three files into a table Students, which has two columns, Id and Name.

Steps to be done


Open the PowerCenter Designer. Go to Sources and click on ‘Import from File’ as shown below –

1_SourceImportFlatFile

Choose one of the files from the source folder and, in the next window, make sure to check ‘Import field names from first line’ –

2_FlatFileImportWizard

Accept the defaults and complete the wizard, setting an appropriate size for the ‘Name’ column. Once the source is set up, we need to set up the destination. Go to ‘Tools’ -> ‘Target Designer’ and then to ‘Targets’ -> ‘Import from Database’ as shown below, and connect to the ‘Students’ table into which you want all the data imported –
3_CreateDestination

Go to ‘Tools’ -> ‘Mapping Designer’ and then to ‘Mappings’ -> ‘Create’. Drag and drop both the source and the destination onto the mapping designer, and then connect the source data to the destination. Shown below is the ‘Source’ –
5_Mapping_DragAndDrop

Connect both columns from the ‘Source Qualifier’ to the ‘Destination’ by dragging and dropping the columns.

6_Mapping_JoiningDestination.png

Once it is set, generate the workflow by going to ‘Mappings’ -> ‘Generate Workflow’. Just follow the wizard and click on ‘Next’ till it finishes.

7_WorkflowGeneration.png

What we have done so far is a basic setup, just like in SSIS when we build a DFT taking a Flat File Source to an OLE DB Destination (or whichever destination). The actual work of looping is done at the session level. Before we get to that, remember that Informatica has two modes of handling flat files – direct and indirect. We need to use indirect mode to handle a list of multiple files.

First, we need to create a file containing the list of all the file names. I created a file called ImportFileList.txt as shown below –

10_IndirectFileList.png
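The list file itself is nothing fancy – just the data file names, one per line (full paths also work if the files live outside the source directory). For our three files it would read –

StudentsData001.txt
StudentsData002.txt
StudentsData003.txt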

Open the PowerCenter Workflow Manager. Open the workflow wf_m_FF_To_Database and then double-click on the session s_m_FF_To_Database. The ‘Edit Tasks’ window opens up as shown below –

11_SessionSQAttributeSetting_0.png

Go to the Mappings tab and, under ‘Sources’, click on ‘SQ_StudentsData001’. In the bottom-most window, set the following properties –

  • Source filetype – Indirect
  • Source file directory – path of the directory where ImportFileList.txt is present.
  • Source filename – ImportFileList.txt

11_SessionSQAttributeSetting.png

That’s about it – those are the basic changes to be done. Save the workflow after performing them. Also ensure, under the Connections tab, that the ‘Students – DB Connection’ connection name is valid.

The next step is to start the workflow by going to Workflows -> Start Workflow. Once it succeeds, you will see that all the files are processed and that ImportFileList.txt no longer exists. The post-processing results are shown below –

12_PostWorkflowResults.png

Next Steps –

  • How do we go about automating the generation of this list file? I am guessing we would need a command task in the workflow that reads the file names and generates the list.
  • How does it change from DEV to, say, UAT – i.e. how do we make it environment-specific?


Import and Export Wizard – Handling Dates in Flat Files

As they say, there is a first time for everything. Having worked on so many packages all through my career, never once had I needed to import a flat file containing dates directly through the SQL Server Import and Export Wizard. It may also be a case of never having set the column data type to ‘Date’, instead taking it as a varchar value when doing imports.

So today, as part of a task, I had a requirement to import data with some date columns in it. Let’s say the data for my columns looks like this, in a file called DateTest.txt. Note that there is a row with blank values –


StartDate,EndDate
2017-01-01,2017-01-01

2017-05-01,2017-05-01

So I opened up the SQL Server Import Wizard, set the data source to Flat File Source, and browsed to the file as shown below –

1_SQLServerImport_General.png

Now go to Advanced and set the data type for both the StartDate and EndDate columns to ‘DT_DBDATE’, which translates to the date data type in SQL Server. For more info refer to this – link. The screenshot below is for ‘EndDate’; do the same for the StartDate column as well.

2_SQLServerImport_Advanced_SetDataType.png
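For reference, since DT_DBDATE maps to date, the table the wizard generates should come out roughly like this (a sketch, not the wizard’s verbatim script) –

create table dbo.DateTest (
    StartDate date,
    EndDate date
);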

Set the destination to your local database, say RnD as in my case, as shown below –

3_SQLServerImport_Advanced_SetDestination.png

In ‘Select Source Tables and Views’, leave everything as is and click ‘Next >’ (this will create the table by default). Leave the defaults in the ‘Save and Run Package’ screen as well and click ‘Next >’. Click ‘Finish’ on the last page of the wizard. You will see ‘Operation stopped’ with ‘Copying to [dbo].[DateTest]’ marked as Error, as shown below.

4_SQLServerImport_EndOfWizard_OperationStopped.png

If you dig further into the Messages, here is the error it throws up –

5_SQLServerImport_EndOfWizard_OperationStopped_ErrorMessage.png

The error states –

An OLE DB record is available.  Source: “Microsoft OLE DB Provider for SQL Server”  Hresult: 0x80004005  Description: “Conversion failed when converting date and/or time from character string.”.
 (SQL Server Import and Export Wizard)

The blank values are treated as character strings that cannot be converted to dates, which is exactly what the error states.

Solution 1 –

Instead of setting the data type to ‘DT_DBDATE’, set it to ‘DT_DATE’ and the import will pass. There are two side effects to this –

  • The destination column will be of type datetime instead of date.
  • All the blank values will be set to ‘1899-12-30 00:00:00.000’, as can be seen below –

6_SQLServerImport_Solution1_Result.png

It’s not an optimal solution. If you are importing just one file with a limited number of such columns, and the destination table you are importing into isn’t large, we can go with this approach. Depending on the use case, one can then proceed to either update the values to NULL or leave them as is.
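If you go the update route afterwards, a quick clean-up along these lines does it (table and column names as in this example) –

update dbo.DateTest set StartDate = NULL where StartDate = '1899-12-30';
update dbo.DateTest set EndDate = NULL where EndDate = '1899-12-30';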

Solution 2 –

This involves creating a package out of the same operations. Now, one can go about it in the traditional way, i.e. open SQL Server Data Tools (Visual Studio), add a new package, drag and drop a DFT, yada yada.

Instead, let’s replicate the same behavior as before. How, you may ask. Did you know that one can fire up the SQL Server Import Wizard from SQL Server Data Tools itself? Before we go further – if you have been following along, the table dbo.DateTest should already exist in your destination.

Open a SSIS Project and go to ‘PROJECT’->’SSIS Import and Export Wizard…’ as shown below –

7_SQLServerImport_Solution2_VisualStudioProject.png

The wizard looks exactly the same as when fired from SQL Server; at the end you will notice the difference. Follow the same steps as earlier, i.e. setting the data type to ‘DT_DBDATE’. Instead of executing, the wizard creates a package, and the final window looks like this. It creates a new package called Package1.dtsx if there isn’t one already; if there is, it creates Package2.dtsx –

8_SQLServerImport_Solution2_VisualStudioProject_EndResult.png

At this point, if you run the auto-generated package as is, you will get the same error. Here are the changes to be made.

Open ‘Data Flow Task 1’ in the control flow. In the data flow, open the source component ‘Source – DateTest_txt’. Ensure that ‘Retain null values from the source as null values in the data flow’ is checked, as shown below –

8_SQLServerImport_Solution2_VisualStudioProject_RetainNull.png

Secondly, double-click on the ‘SourceConnectionFlatFile’ connection manager, go to Advanced, and modify the data types of StartDate and EndDate to DT_STR with length 10. The image below shows EndDate; do it for StartDate as well.

20_ImportExport_AdvancedConnectionManager

Since the source connection manager has changed, the source component ‘Source – DateTest_txt’ needs a refresh. Double-click on it and you will be presented with the changes as shown below. Accept them.

21_ImportExport_RefreshFlatFileConnectionManager

Delete the connector between the Flat File source and the OLE DB Destination and drag a Derived Column transform in between them. Add the following two expressions as shown below –

  • Derived Column Name – DC_StartDate ; Expression – StartDate == "" ? NULL(DT_DBDATE) : (DT_DBDATE)StartDate
  • Derived Column Name – DC_EndDate ; Expression – EndDate == "" ? NULL(DT_DBDATE) : (DT_DBDATE)EndDate


17_SQLServerImport_Solution2_DerivedColumnTransform.png

Connect the Derived Column output to the OLE DB Destination and set the mappings to the newly transformed columns.

24_ImportExport_OLEDBDestinationMapping

In addition to that, set the ‘Keep Nulls’ property on the destination to yes.

23_ImportExport_OLEDBDestination

That’s it. Now execute the package. All three records now get processed successfully –

25_ImportExport_ExecutionResult

Data gets transferred with the blank values retained as NULL values, as shown below –

26_ImportExport_ExecutionResultTable

To summarize the solution: out of the box, the Import/Export Wizard will not bring blank values through as NULLs in date columns. Here are the changes to be done –

  • Set all the date columns for which you want blanks retained as NULL to string (DT_STR) values at the source.
  • Add a Derived Column transformation to convert the data to DT_DBDATE.
  • Set the retain-nulls properties at both the source and the destination to yes.

Solution 3 –

The ideal solution would be the one wherein one can use DT_DBDATE at the source itself and have it go through to the destination. For some reason I have been getting strange errors while doing that, as shown below –

Error:Year, Month, and Day parameters describe an un-representable DateTime.

I am still working on it. Once I get a better solution, I will post it here.


Informatica 101 – Starting the first mapping

At this point I am assuming you have been able to successfully (and legally) install Informatica 9.6.1 by following the steps listed out in this article – link.

First let’s define the problem statement.

Task –

Load data from a table into a flat file. The table data that I am loading is from the good old AdventureWorks database. Here is the query, which I have stored in a view – Person.vw_GetTop10Persons –

select top 10 BusinessEntityID
,FirstName
,LastName
from Person.Person
order by BusinessEntityID;
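In case you want to set the view up yourself, it is just that query wrapped in DDL (the TOP clause is what makes the ORDER BY legal inside a view) –

create view Person.vw_GetTop10Persons
as
select top 10 BusinessEntityID
,FirstName
,LastName
from Person.Person
order by BusinessEntityID;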

Steps –

Open the PowerCenter Repository Manager and connect to the repository that you have created. Go to ‘Folder’ -> ‘Create’, give it a name, say ‘InformaticaLearning’, and click on ‘OK’ as shown below –

CreateRepositoryFolder.png

Click on Start, open up the PowerCenter Designer, and connect to the repository. The ‘InformaticaLearning’ folder that we created in the previous step will now appear. Right-click on the folder and click ‘Open’ as shown below. This is how a project is opened –

Opening project from Repository

Unlike SSIS, wherein you drag a Data Flow Task and start creating the source and destination within it, the process here is to create sources and destinations as separate entities. The first step is to create a source. Go to Sources -> Import from Database as shown below –

SourceImportDatabase

In the ‘Import Tables’ window, click on the ellipsis button. The ODBC Source Administrator (32-bit) window pops up. Click ‘Add’, and in the ‘Create New Data Source’ window scroll down to the appropriate SQL Server client and click on ‘Finish’. I have SQL Server 2014 as well as SQL Server 2008 R2, and the driver I am choosing is SQL Server Native Client 11.0, as seen below –

CreatingSourceFromImportTables.png

The following screens give an indication of the next steps in creating the data source –

NewDataSource_1
NewDataSource_2

Make sure to change the default database to the actual database from which you are sourcing the data, which in this case is AdventureWorks2014.

NewDataSource_3.png

NewDataSource_4

NewDataSource_Finish

The ‘User Data Sources’ list should now contain ‘AdventureWorks’. Click on ‘OK’. You will get back to the original ‘Import Tables’ window. Here you need to select the source again from the drop-down and click on ‘Connect’. All the tables will appear in the ‘Select tables’ window.

ImportTables_Tables.png

In the ‘Search for tables named:’ section, type the name of the view – vw_GetTop10Persons – and click on ‘Search’. The narrowed result will now appear. Click on ‘OK’ as shown below –

ImportTables_Tables_2

The source is set. Now let’s create the destination. Go to Tools -> Target Designer, then go to ‘Targets’ -> ‘Create’ as shown below –

Targets_Create_NewTarget.png

In the ‘Create Target Table’ window, enter a name for the target, say ‘Top10Persons’, set ‘Select a database type:’ to ‘Flat File’, click on ‘Create’, and then press ‘Done’.

Targets_Create_NewTarget_2.png

You now need to define the column details. Double-click on the newly created table and define them. You may be wondering where to put the file path, right? It’s not done here – it is done at the workflow level –

Targets_Create_NewTarget_3.png

Now go to ‘Tools’ -> ‘Mapping Designer’. Click on ‘Mappings’->’Create’ as shown below and give it a name m_SQL_Persons_To_FF and click on ‘OK’

Mappings_Create

In the ‘Mapping Designer’ pane, drag and drop the source vw_GetTop10Persons onto the designer as shown below, just the same as how we drag and drop data flow components. You will see that the source appears along with an additional ‘SQ’ block. This is called the ‘Source Qualifier’ and is basically used as a stop for homogenizing all the source data. Every source, when dragged in, gets a source qualifier associated with it. We will see more on that later on. Here is the window of the source along with the Source Qualifier –

Mappings_Create_2
Mappings_Create_3

Right-click on any column in the Source Qualifier and click on ‘Select All’ –

Mappings_Create_4

Click on the square box beside the column ‘BusinessEntityID’ in the selected list and drag it to the corresponding column at the destination as shown below.

Mappings_Create_5

Once done, the mappings between both items will be seen.

Mappings_Create_6

Click on ‘Mappings’ -> Validate and the following information will appear, giving the results of the validation.

Mappings_Create_SaveResults

The next step is creating a workflow for this, as a mapping can’t run independently. Go to Mappings -> Generate Workflow as shown below –

Workflow_Generate

In the ‘Workflow Generation’ window, accept the defaults and click on ‘Next >’ until the fourth step, then click on ‘Finish’.

Go to Tools -> Workflow Manager. Expand ‘Workflows’ and you will find the newly created ‘wf_m_SQL_Persons_To_FF’. Right-click it and click on ‘Open’ as shown below –

Workflow_Start.png

We first need to create a connection again, this time a relational connection for ‘AdventureWorks’. Go to ‘Connections’ -> ‘Relational’ and click on ‘New’ –

1_RelationalConnection.png

In the ‘Select Subtype’ window, click on ‘Microsoft SQL Server’ and click on ‘OK’. The new Connection Object Definition opens. Note that this is entirely different from the ODBC source that you configured while creating the source in the Designer. For the attributes, the following details have to be supplied, as highlighted –

2_RelationalConnection.png

Click on ‘OK’, and then, with the newly created object ‘AdventureWorks’ selected, click on ‘Close’.

Double-click on the session object ‘s_m_SQL_Persons_To_FF’. The ‘Edit Tasks’ window will now appear. Go to the ‘Mapping’ tab and click on ‘Connections’. For the SQ entry, we need to change the value to the connection we just created, so click on the down-arrow button as shown below and select ‘AdventureWorks’ in the ‘Relational Connections Browser’ pop-up.

3_RelationalConnection.png

We now need to configure the flat file connection details. For this, click on ‘Files, Directories and Commands’. Give the values for ‘Output file directory’ and ‘Output filename’ as shown below –

4_RelationalConnection.png

Go to ‘Workflows’ -> Validate and ensure there are no errors. Press Ctrl+S to save. Then go to ‘Workflows’ -> Start Workflow. To track the progress of the workflow, we need to check the Workflow Monitor. Click on Tools -> Workflow Monitor and navigate to the project and the workflow.

You will get a Gantt Chart view of the run indicating the success as seen below –

5_RelationalConnection.png

And that, my friends, is how we create the ‘Hello World’ equivalent of a package in Informatica. Looks complicated, doesn’t it? Slowly, you will get used to it.

The next post is all about how to generate dynamic file names.