Who is the Company

A global supplier of equipment and services used by the semiconductor industry.

The Challenge

The company manages huge data sets (big data) used by various stakeholders to derive valuable insights. It relies on Azure Synapse Analytics, which provides data integration, management, and extract-transform-load (ETL) support. A massive amount of data is continuously processed through its data pipelines, and the processed data feeds Microsoft Power BI reports that inform critical business decisions. The end-to-end data flow had to be processed promptly, as any delay directly affected top-level business management.

The immense data load, extending from source to destination, placed considerable strain on the Synapse Analytics data pipelines and data warehouse tables. The system comprises roughly 250 pipelines and over 400 objects, including tables and views. As the data flow continued to grow, the company's development and support teams faced an increasing number of issues escalated by top management. Monitoring and maintaining the expanding data flow became increasingly challenging.

To address the rising number of escalations, the support team resorted to manually executing SQL scripts every few hours, which amounted to a lot of dedicated time and effort. The manual data flow monitoring process presented several problems that led to inefficiencies and inaccuracies.


Manual monitoring occurred irregularly, which increased the chances of missing critical failures, since alerts were configured only for master pipeline failures. In cases where only a child pipeline failed and not the master, the failure went undetected.

Finally, tracking of monitoring results was inconsistent. Only the L1 team had access to the Confluence system where results were recorded, leaving the other support teams with limited visibility. This became a source of even greater inaccuracies.

To summarize, the company was looking for the following:

  • Effective and efficient monitoring: The existing monitoring regime was error-prone and demanded considerable time and effort.
  • Increased abnormality visibility: Manual monitoring was limited to the L1 team, leaving the rest of the team with low visibility. The company aimed to ensure higher visibility of abnormalities across all team members.
  • A proactive approach: The company aimed to transform its reactive approach into a proactive one by detecting and resolving data load issues before they escalated, minimizing potential disruptions.

The Solution

The company’s existing setup included Azure Synapse Analytics components, data pipelines, linked services, and API connections. Our Managed Services team leveraged these existing tools to build an automated solution that effectively monitors the pipelines and objects, sparing the company the cost of purchasing additional tools.

The team integrated SQL monitoring scripts into Synapse Analytics pipeline activities, enabling them to run against the internal Synapse data warehouse. The results are fed into Azure Logic Apps, which sends an HTML email notification to the designated distribution lists every two hours. By implementing this automated monitoring pipeline, the GSPANN team drastically reduced the effort spent on manual monitoring.
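The two-hour digest described above can be sketched as follows. This is a minimal illustration, not the company's actual script: the row shape (pipeline name, status, error message) is a hypothetical stand-in for whatever the SQL monitoring scripts return before the result is handed to Logic Apps for emailing.

```python
from html import escape

def build_digest(rows):
    """Render pipeline-run rows as the HTML table for the email alert.

    `rows` is a hypothetical shape for the SQL script's output:
    (pipeline_name, status, error_message or None).
    """
    cells = "".join(
        "<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            escape(name), escape(status), escape(error or "")
        )
        for name, status, error in rows
    )
    return (
        "<table border='1'>"
        "<tr><th>Pipeline</th><th>Status</th><th>Error</th></tr>"
        + cells + "</table>"
    )

# Illustrative pipeline names, not the company's real ones.
digest = build_digest([
    ("pl_sales_load", "Succeeded", None),
    ("pl_child_stage", "Failed", "Timeout on source copy"),
])
```

In the deployed solution, a string like `digest` would become the body of the HTML email that Logic Apps sends on its two-hour recurrence schedule.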

Our engineers implemented automated monitoring so that even child pipeline failures are captured in the scheduled emails. The improvement also reveals whether any pipeline triggers have been stopped.
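The two checks in this paragraph, a child failing under a master that succeeded, and a trigger that is no longer running, can be sketched like this. The field names (`run_id`, `parent_run_id`, `status`, `state`) are assumptions modeled loosely on Synapse pipeline-run metadata, not the company's actual schema.

```python
def find_hidden_failures(runs):
    """Return runs where a child pipeline failed but its parent succeeded."""
    status = {r["run_id"]: r["status"] for r in runs}
    return [
        r for r in runs
        if r.get("parent_run_id")                          # it is a child run
        and r["status"] == "Failed"                        # the child failed
        and status.get(r["parent_run_id"]) == "Succeeded"  # but the master looks healthy
    ]

def find_stopped_triggers(triggers):
    """Return triggers that are not in the running ('Started') state."""
    return [t for t in triggers if t["state"] != "Started"]

# Illustrative data; pipeline and trigger names are hypothetical.
runs = [
    {"run_id": "1", "pipeline": "pl_master_load", "status": "Succeeded", "parent_run_id": None},
    {"run_id": "2", "pipeline": "pl_child_stage", "status": "Failed", "parent_run_id": "1"},
]
triggers = [{"name": "trg_2h_monitor", "state": "Stopped"}]

hidden = find_hidden_failures(runs)        # the failed child under a succeeded master
stopped = find_stopped_triggers(triggers)  # the stopped trigger
```

This is exactly the blind spot the old alerting had: an alert keyed only to master pipeline status would report "Succeeded" here, while the scheduled digest surfaces both anomalies.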


Key aspects of our solution included:

  • Identifying child pipeline failures: Failures in child pipelines, which previously went unnoticed because alerts covered only master pipelines, are now surfaced.
  • Early detection of issues: Pipelines and objects are now closely monitored every two hours. Any problems detected by the new system are handled directly within the two-hour window.
  • Timely refreshing of Power BI reports: The new system ensures that critical Power BI reports are refreshed on time despite massive data loads.
  • Reusing the existing infrastructure: Our team used the untapped capabilities of the existing Synapse Analytics platform, saving the company the cost of procuring additional infrastructure.

Business Impact

  • Enhanced data confidence: The implementation of the new system ensures timely delivery of accurate data, which in turn boosts business users' confidence. Consequently, they can make better-informed decisions, leading to improved company profitability.
  • Efficient resource utilization: The L1 team is no longer burdened with conducting manual monitoring queries every two hours, allowing them to focus on their primary responsibilities. This shift in focus results in more effective support for the entire company.
  • Improved employee satisfaction: The L1 team no longer needs to work weekends to ensure data continuity, which saves the company money and improves morale. Because the team can now devote its full attention to primary support responsibilities, day-to-day support across the company has improved as well. Happy employees are highly productive, which has a positive ripple effect company-wide.

Technologies Used

Microsoft Azure
Azure Synapse Analytics
Azure Logic Apps

Related Capabilities

Utilize Actionable Insights from Multiple Data Hubs to Gain More Customers and Boost Sales

Unlock the power of the data insights buried deep within your diverse systems across the organization. We empower businesses to effectively collect, beautifully visualize, critically analyze, and intelligently interpret data to support organizational goals. Our team ensures good returns on the big data technology investments with the effective use of the latest data and analytics tools.
