Having worked with Google Cloud Composer for quite a long time, when someone says "pipeline" my brain automatically goes to one and only one thing – the DAG. With such a wide variety of pluggable options available, it feels like manna. It is somewhat similar to the tools of yore – Informatica, SSIS etc. – but with less GUI and more extensibility. It is only in my current job that I am discovering how much more Google has to offer out of the box, at much cheaper price points, and I am loving the autonomy I am given to come up with solutions.
That's not to say it's the wild wild west out here with everyone left to do whatever they want. There are some established patterns, listed below, which I have worked on –
Airflow DAGs running dbt models – Very costly. Essentially, all the jobs do is run a KubernetesPodOperator, which in turn runs dbt commands (see the sketch just after this list). It is effective when there are a lot of ad-hoc jobs being run across the project. I am planning to put up a complete post about it, especially about dbt (which I am REALLY excited about).
Databricks – Mostly for ML models, but there are a good number of legacy jobs doing ETL workflows too. Again, a very pricey option for simple workflows. I loved working on it and seeing the flexibility it provides and the ease of debugging.
Airflow DAGs – Regular Airflow jobs with all the elements of ETL (very few jobs), built from plug-and-play operators.
Cloud Functions – Using the Cloud Functions trigger pattern to deploy BigQuery objects (the main meat of this post).
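Referring back to the first pattern above, here is a minimal, hedged sketch of what such a DAG can look like. It assumes Airflow 2.x with the cncf.kubernetes provider installed (the operator's import path differs slightly between provider versions); the DAG id, namespace, image and dbt target are hypothetical placeholders rather than our actual setup.

```python
# Minimal sketch of the "Airflow DAG running dbt via KubernetesPodOperator" pattern.
# Assumes Airflow 2.x + the cncf.kubernetes provider; names below are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

with DAG(
    dag_id="dbt_daily_run",                 # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_dbt = KubernetesPodOperator(
        task_id="run_dbt_models",
        name="run-dbt-models",
        namespace="composer-user-workloads",            # assumed workload namespace
        image="gcr.io/my-project/dbt-bigquery:latest",  # hypothetical dbt image
        cmds=["dbt"],
        arguments=["run", "--target", "prod"],          # hypothetical dbt target
        get_logs=True,
    )
```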
In addition to the above there are some more, such as Cloud Run with Cloud Scheduler / Workflows, and Vertex AI using dbt. I haven't really worked on them yet but will be discovering and learning about them.
Cloud Functions, as Google states, let you just write your code and have Google handle all the operational infrastructure. With the release of version 2, there are literally hundreds of ways in which you can orchestrate Cloud Functions and integrate them with a multitude of actions across the entire Google Cloud Platform suite.
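As an illustration of that trigger pattern, here is a minimal, hedged sketch of an HTTP-triggered gen 2 Cloud Function that runs a DDL statement against BigQuery. The function name, dataset and table are hypothetical, and error handling is omitted for brevity.

```python
# Sketch of an HTTP-triggered Cloud Function (gen 2) that runs DDL against BigQuery.
# Dataset/table names are hypothetical; error handling is omitted for brevity.
import functions_framework
from google.cloud import bigquery


@functions_framework.http
def deploy_bq_objects(request):
    payload = request.get_json(silent=True) or {}
    # DDL supplied by the caller, with a hypothetical default for illustration
    ddl = payload.get(
        "ddl",
        "CREATE TABLE IF NOT EXISTS analytics.customer (id INT64, name STRING)",
    )
    client = bigquery.Client()
    client.query(ddl).result()   # DDL runs like any other query job
    return {"status": "done", "statement": ddl}, 200
```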
Extracting schema information from Databricks would seem to be a very simple task, wouldn't it? I mean, there is INFORMATION_SCHEMA to use. Unfortunately, that is only applicable to Unity Catalog, i.e. a Databricks metastore that can share data across multiple Azure Databricks workspaces, and no such catalog exists in our project. The best recourse is to use the Python Spark SQL libraries, as sketched below.
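Here is a minimal, hedged sketch (not our actual job) of pulling column metadata with Spark SQL from inside a Databricks notebook or job, where the `spark` session is already available; the table names are hypothetical.

```python
# Hedged sketch: extract column metadata with Spark SQL, no Unity Catalog required.
# Assumes this runs in a Databricks notebook/job where `spark` is already defined.
tables = ["sales.orders", "sales.customers"]   # hypothetical tables

schema = {}
for table in tables:
    rows = spark.sql(f"DESCRIBE TABLE {table}").collect()
    # DESCRIBE TABLE returns col_name / data_type / comment; section markers start with '#'
    schema[table] = {
        r.col_name: r.data_type
        for r in rows
        if r.col_name and not r.col_name.startswith("#")
    }

# Invert the mapping to see which table(s) an un-aliased column could belong to
column_owners = {}
for table, cols in schema.items():
    for col in cols:
        column_owners.setdefault(col, []).append(table)
print(column_owners)
```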
Problem
In one of my recent tasks, I had to update a large SQL query comprising 10+ Databricks tables, adding new attributes and replacing some existing ones. The only problem was that most of the attributes had no aliases, so there was no way to know which attribute came from which table. It's one of the most irritating and annoying things that bothers me to no end and makes me feel like…
Quite often in the projects I have worked on, I see disparate data pipelines doing the same tasks but with different operators and different methodologies for achieving the same objective. This could be due to new people joining the project not being aware of similar work done before, or people trying to come up with their own approach. It's not an ideal way of working; where possible it would be good to have a framework set up and have the team implement it, with end-to-end documentation to guide them.
In the course of my work, I came across a robust solution to achieve this, albeit with extra layering, i.e. nesting two levels down, which I will explain further. At first, it was very hard for me to absorb what was going on. Now, with some time on my hands and the chance to look at it again from a fresh perspective, I can decode it better and appreciate the work done. This is my attempt to make it easier for people to read and implement for their own use case.
Business Problem
Let’s say we need to perform data loads for a data warehousing project in GCP. The typical flow for a table would be as shown below –
I had an interesting use case as part of my work wherein we needed to generate nested JSON out of table definitions (i.e. from INFORMATION_SCHEMA.COLUMNS), which was then used by another system for further processing. Whenever the schema changed, say a column was added or removed or a data type changed, we applied the change manually to the JSON file. This process was error prone, and as the number of tables to maintain grew, so did the risk of manually modifying the JSON every time a table-level change happened.
Looking for a simpler solution, I found that Google BigQuery had some ready-made functions up to the task, namely –
ARRAY_AGG – Returns an ARRAY of the specified expression values and can be used with aggregation (GROUP BY).
STRUCT – Constructs a container of ordered, named fields, i.e. closer to a named tuple or dictionary in Python than a list. Wrapping STRUCT inside ARRAY_AGG yields an ARRAY of STRUCTs, which is exactly the nesting we need (a minimal sketch combining the two follows).
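Here is a minimal, hedged sketch of how the two can be combined against INFORMATION_SCHEMA.COLUMNS to produce one nested JSON definition per table; the project and dataset names are hypothetical placeholders.

```python
# Hedged sketch: aggregate INFORMATION_SCHEMA.COLUMNS into one nested JSON document
# per table using ARRAY_AGG + STRUCT. Project/dataset names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT
  table_name,
  TO_JSON_STRING(
    STRUCT(
      table_name,
      ARRAY_AGG(
        STRUCT(column_name, data_type, is_nullable)
        ORDER BY ordinal_position
      ) AS columns
    )
  ) AS table_definition_json
FROM `my-project.my_dataset.INFORMATION_SCHEMA.COLUMNS`   -- hypothetical dataset
GROUP BY table_name
"""

for row in client.query(sql).result():
    print(row.table_name, row.table_definition_json)
```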
Let me illustrate the use case with an example. Say we have the following dataset (data taken from here) –
I had an interesting business problem to solve and wanted to share how it can be achieved.
Business Problem
On a daily basis, a zip file containing various flat files is dropped at a file location. The contents of the flat files are to be read and extracted. All the files have the same metadata.
File names within the zip file are dynamic.
Solution
Alteryx provides an out-of-the-box Input tool for working with zip files. All one needs to do is drag and drop the zip file onto the canvas, and the tool itself will pop up asking which files are to be extracted, as shown below.
Pop-up shown when dragging a zip file onto the canvas.
In the quest for the next big break or opportunity out there, I have come to expect these interview questions and have decided to blog about them.
Phase 1 has ended. Phase 2 is up ahead.
Background – After months of relentlessly applying for jobs with the mix of skill sets I am eligible for, and going through multiple ghost calls (I believe that's what it's called when recruiters spend 20 to 30 minutes inquiring about everything, explaining the job needs, setting pay expectations and raising false hopes, and then you never hear anything back even after sending multiple emails or messages), I was finally fortunate to have a call set up with an actual person at an actual company.
Unfortunately, the interview didn't pan out well. The interview itself started with getting to know my current role and the team I would be working with, and then went straight into the actual questions. The first question I was asked was about my 'toughest challenge' to date. It caught me a bit off-guard, as I was expecting some technical questions to start with before settling into behavioural ones and finally ending with expectations for the next meeting. I did my best to explain the things I have done, but I guess it just wasn't good enough.
Why really?
This will act not only as a reference for myself but also as a reminder of the other 'tough challenges' that lie ahead and that I will need to face in the future.
Second, and perhaps the bigger reason: writing this down has been very cathartic.
Not EVERYTHING can be put on a resume, so here it is then.
Without further ado, let me get to the 'Toughest Challenge' question. Questions like this need to be addressed via the STAR (Situation, Task, Action, Result) method.
I was inspired to write this blog after seeing a post on my company's Workplace talking about wellbeing during COVID times and effective communication. As a working professional there are some 'mantras' that I live by, and here they go –
General
Ownership of a production issue – If you are tasked with an issue that requires urgent attention for a production need, ensure that you own it right from start to finish. This means assigning a proper timeline to start with and, upon delivery, ensuring it gets tested in UAT, goes to Pre-Prod and then finally gets deployed to Prod. Keep your eyes and ears on it all throughout this process through regular follow-ups, and keep watching it post production.
Look out for opportunities – Quite often, apart from your regular work, you may come across tasks that are done by following a set process with a certain sequence of steps in a methodical manner. Usually the people doing them spend a considerable amount of time out of their normal routine to accomplish them (sometimes a week or more!). That should be the first sign of an opportunity to seize and automate things.
Commitment to a task – Never commit to anything upfront. If anyone comes to you with a request to deliver something urgently, take some time to pause and analyse it first. Only after proper analysis should you give a timeline for when it can be done. When giving an estimate, consider the time for the actual build, unit testing, regression testing (if needed), design document updates, peer review (a must) and rework.
Meetings – Always have an agenda for a meeting and circulate it beforehand, as it provides context for the participants. Be mindful of time, and do not book a meeting after hours just because that is the only time one person is available. If you feel a phone call would be easier, then by all means give them a ring and get it done fast.
Technical Front –
When solutioning a problem ask yourself the following questions –
How critical is this problem?
Are there any more areas where such a problem exists? If so, what can be done about it?
At the whole-component level, is there something I can do to make the solution better?
Don’t be afraid to loosen things up and go one step further if you feel you can deliver a more robust and stable solution. Pitch for it if you are confident that you can deliver within the time frame.
Actual Fix –
Provide proper comments in the code, and even in the fix details, so that anyone can understand what has gone into it.
Do thorough formatting of the code to make it more readable.
Look for extensibility and scalability of the solution (how does it impact Asset)
Ability of the solution to withstand large volumes (query plan analysis, statistics checks etc.).
It's been almost 2 weeks since I passed the certification exam, and I wanted to pen down the high-level details of all the components I studied to pass the AWS Developer Associate exam.
First off, I would like to thank Stephane Maarek and his wonderful Udemy course – Ultimate AWS Certified Developer Associate – without which I am not sure I could have even inched past the preliminary set pieces.
Background – Like every software developer worth their salt, my fascination with learning cloud technologies began a few years back. With the help of Pluralsight courses, I started off on my learning. The course, as usual, was of excellent calibre, but one teeny tiny detail that wasn't mentioned was the need to monitor the bill. I was of the opinion that 750 hours of free tier would last a lifetime.
I drifted off the course for a while, forgot to turn off the EC2 instances and voila! One fine day I saw a bill of AUD $160 in my mailbox. I immediately contacted the support centre and had the account suspended.
It really scared me off for a while and I put off learning about it for quite a bit of time.
Motivation – I – On and off after that experience, I just dabbled with S3 storage and static websites, trying to programmatically load some images using the Amazon SDK. As part of my Udacity Nanodegree experience, I worked on small ETL batch jobs using Python modules, first loading the data onto S3 and then into Redshift as the final destination. The whole program, though, left me with a bad taste (one of the worst support systems and sub-par course quality), although I managed to create some portfolio projects.
As I started my job search, I realised it's hard to convince people that I am well acquainted with AWS technologies and know how to work with them. Though I don't directly work on AWS in my current role, I am quite aware that the Cloudera offering we have is deployed across a multitude of EC2 clusters, and that we are not using the out-of-the-box EMR provided by Amazon.
Additionally, I have quite often been asked if I at least have a certification.
Motivation – II – When I started searching through the certification offerings from AWS, I realised the one I really want to take is AWS Certified Data Analytics – Specialty, as I aim to become a Big Data Engineer/Developer. That certification explores the whole gamut of technologies one can utilise as part of Data Analytics on Big Data.
The ones highlighted are those that I have worked with or am working with. AWS recommends having an Associate certification before attempting a Specialty one, and I chose Certified Developer – Associate out of the three options. Asking around among friends and colleagues, I could see that the Udemy course was a strong first choice, followed by an A Cloud Guru subscription. I took the former. It was an intense 4-week preparation that ultimately bore results. So, without much further ado, here is a recap of the suite of products that I have learnt –
1. IAM (Identity and Access Management) – Access management forms the heart and soul of the AWS ecosystem. It has a global view, and all permissions are governed by Policies (written in JSON format). Governance is accorded in three segments: Users, Groups and Roles.
2. EC2 (Elastic Compute Cloud) – EC2 is akin to virtual servers in the cloud. AWS provides a whole gamut of choices based on 5 distinct characteristics: RAM, CPU, I/O, Network and GPU. Additionally, you can have different launch types too: On-Demand Instances (short workloads), Reserved Instances (minimum 1 year), Spot Instances (short workloads, less reliable, you can be kicked off the instance), Dedicated Instances (exclusive access to the hardware, not shared with anyone) and Dedicated Hosts (booking an entire physical server, control over instance placement etc.).
3. ELB (Elastic Load Balancer) – Load balancers are servers that forward internet traffic to multiple EC2 instances, essentially spreading the load across downstream instances. Three types of load balancers are present: Classic Load Balancer, Application Load Balancer (v2) and Network Load Balancer (v2).
4. ASG (Auto Scaling Group) – The purpose of an ASG is to scale out (add EC2 instances) to match increased load, or scale in to match decreased load. Goes hand in hand with ELBs. The trigger for scaling can be CPU, network or even custom metrics. Various types of scaling can be done: step scaling, scheduled scaling etc.
5. EBS (Elastic Block Store) / Instance Store / EFS (Elastic File System) – EBS is a network drive you can attach to an EC2 instance while it runs, retaining data if the instance crashes. EBS volumes are locked to an AZ. Depending on need, various types of volumes are available (large to small, high latency to low latency etc.). An EBS volume can be attached to only one EC2 instance. An Instance Store, unlike EBS, is like a USB drive attached to the EC2 host and is available directly from the machine; on the flip side, you lose all the data if the instance crashes. Elastic File System is a highly scalable, expensive storage option that is available across multiple AZs and can be attached to multiple EC2 instances.
6. RDS (Relational Database Service) / Aurora / ElastiCache – RDS is the managed database service from the AWS stable, providing automated provisioning, continuous backups, read replicas, auto-scaling (both vertical and horizontal), OS patching and so on. Aurora is a serverless database offering from AWS, akin to RDS on steroids, i.e. up to 5 times more performant. ElastiCache is to in-memory databases what RDS is to relational databases: it gives the ability to cache requests and reduce the hits going to the DB (remember, on the cloud every read/write counts towards cost). Two types: Redis (backup and restore features) and Memcached (non-persistent).
7. Route 53 – A service akin to traffic police redirecting road traffic. Redirection can be done at the domain level (CNAME) or to another Amazon resource (Alias). Various types of routing are available: Simple, Multi-Value, Geolocation, Weighted and Failover routing.
8. VPC (Virtual Private Cloud) – VPC isn't extensively asked about in the Developer Associate exam, so high-level knowledge should suffice. It's a private network in which to deploy resources, within which public and private subnets can be set up. NAT Gateways and Internet Gateways are used to communicate with the public internet.
9. Amazon S3 (Simple Storage Service) / Athena – S3 is one of the major building blocks of AWS: an infinite storage layer for a wide variety of data. Data is stored in buckets (directories) and version control can be enabled. One of the most interesting things I found is the range of storage classes, from General Purpose all the way to Glacier Deep Archive (minimum 180 days of storage). Athena is a serverless service to perform analytics directly against S3 files remotely.
10. CloudFront – A Content Delivery Network that improves read performance, provides DDoS protection etc. Offers a global edge network; great for static content that must be available everywhere.
11. ECS (Elastic Container Service) / Fargate – Container management service for Docker workloads; ECS clusters are logical groupings of EC2 instances. Fargate provides serverless management of container services, giving high scalability without manual intervention.
12. Elastic Beanstalk – A developer-centric view of deploying applications on AWS. Has three main components: Application, Application Version and Environment name (dev, test, prod) etc. Provides highly flexible deployment modes (All-At-Once, Rolling, Rolling with Additional Batches, Immutable), and CLI capabilities can be used to manage it entirely via code.
13. AWS CICD – DevOps on AWS can be done using these components providing CI/CD: CodeCommit, CodePipeline, CodeBuild and CodeDeploy.
14. CloudFormation – Infrastructure as Code. I absolutely LOVE this feature; it's just mind-blowing in every sense. It's a declarative way of outlining AWS infrastructure: create a template of the infrastructure you desire, and then it's just a matter of creating and removing that infrastructure at the click of a button. I will be focusing more on this from now on to enrich my learning.
Monitoring (CloudWatch / X-Ray / CloudTrail) – All applications send their logs to CloudWatch, and alarms can be set for notification in case of unexpected metrics. X-Ray provides automated trace analysis and a central service map visualisation, with request tracking across distributed systems. CloudTrail audits API calls made by users, services and the AWS console; useful to detect unauthorised calls or the root cause of changes.
20. AWS Integration & Messaging (SQS / SNS / Kinesis) – SQS is consumers polling data, with messages deleted after being read; a highly scalable service. SNS is messages being pushed to subscribers (up to 10M subscribers), with easy integration with SQS for the fan-out pattern. Kinesis is used for streaming data, where the data gets distributed across multiple shards; data is read-only, which gives the ability to run multiple analyses over it.
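To tie the messaging entry back to code, here is a small, hedged boto3 sketch of the SQS behaviour described above (a producer sends, a consumer polls and must explicitly delete the message); the region and queue URL are hypothetical placeholders.

```python
# Hedged sketch of SQS producer/consumer behaviour with boto3.
# The region and queue URL below are hypothetical placeholders.
import boto3

sqs = boto3.client("sqs", region_name="ap-southeast-2")
queue_url = "https://sqs.ap-southeast-2.amazonaws.com/123456789012/my-queue"  # hypothetical

# Producer side: push a message onto the queue
sqs.send_message(QueueUrl=queue_url, MessageBody="hello from the producer")

# Consumer side: poll, process, then explicitly delete (otherwise the message
# becomes visible again after the visibility timeout)
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```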
Once the base macro is set, we now need another macro which can spit out the sheet names from a given Excel file. Perform the following steps to create this macro –
1. Drag in an 'Input Tool' and connect it to any of the existing Excel files, say 'Movies.xlsx'. On connection, choose the option 'Import only the list of sheet names' (as shown below). Additionally, in the configuration pane of the tool, set 'Output File Name as Field' to 'Full Path'.
2. Drag in a 'Formula' tool and create a new field 'FullPath' with the following formula: TrimRight([FileName], '<List of Sheet Names>') + "'" + [Sheet Names] + "$'"
3. Drag in a 'Select' tool and deselect everything but the 'FullPath' field.
4. Drag in a 'Macro Output'. Here is how it should look –
Run the workflow and ensure the full path is being shown –
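For comparison only (outside Alteryx), the same "list the sheet names of a workbook" step can be sketched in Python with pandas; the file path below is a hypothetical placeholder.

```python
# Hedged, non-Alteryx comparison: list the sheet names of an Excel workbook with pandas.
# The file path is a hypothetical placeholder.
import pandas as pd

workbook_path = r"C:\data\Movies.xlsx"
sheet_names = pd.ExcelFile(workbook_path).sheet_names
print(sheet_names)   # e.g. ['Sheet1', 'Sheet2', ...]
```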
As my current contract takes its last breath and I begin to search for opportunities, this one question has really been doing the rounds in my mind. Let me elaborate on that question before people jump the gun: does experience in a particular technology really matter if you already have knowledge of the related subject matter?
For over a decade since starting my career, I had worked with the MSBI suite of technologies, with PowerShell and C# thrown in. It wasn't until I came to Sydney to work for a different client (LINK Group) that I had the opportunity to work with a different set of tools, such as Informatica and Oracle. Things were going well.
As the work came through, I adapted myself to learning the new tools and working with them. It didn't take long to upskill. It was then that I got a lucky break. Vinay Sammineni of Cognitivo Consulting and his partner Alan Hsiao picked up my CV (from a mutual friend) and saw that I had very good data warehousing and SQL skills.
They phoned me and asked if I was interested in working for Amaysim, who were looking for a data analyst with Alteryx, Tableau and Amazon Redshift skills, and further stated that they had also set up an interview with them. This came as quite a shock, and I clearly remember asking them multiple times whether they had gone through my CV, as I didn't have any experience in any of those tools up to that point.
The interview with Jacquie went well, and the questions were mostly focused on standard SQL and my past work experience – the challenges I had faced, dealing with demanding managers etc. She then went on about the tools the company was using, was quite blasé about me not knowing them, and said I shouldn't find any difficulty getting acclimatised to them.
That was how my new path began to take shape. I honestly can't thank the folks at Amaysim, and also Vinay and Alan, enough for believing in me and my ability to transfer my existing knowledge and use it to develop new skills.
As I apply for Data Engineering roles, I hope I come across a company that can see my capability to transfer my existing breadth of knowledge, rather than judging me solely on the exact tech skills I currently work with.