Datadog Launches New Product to Observe, Troubleshoot and Optimize Data Processing Jobs

Data Jobs Monitoring detects and helps resolve job failures and latency spikes across data pipelines

Datadog, the monitoring and security platform for cloud applications, announced the general availability of Data Jobs Monitoring, a new product that helps data platform teams and data engineers detect problematic Spark and Databricks jobs anywhere in their data pipelines, remediate failed and long-running jobs faster, and optimize overprovisioned compute resources to reduce costs.

Data Jobs Monitoring immediately surfaces specific jobs that need optimization and reliability improvements, and lets teams drill down into job execution traces to correlate job telemetry with their cloud infrastructure for fast debugging.

“Data Jobs Monitoring enables my organization to centralize our data workloads in a single place—with the rest of our applications and infrastructure—which has dramatically improved our confidence in the platform we are scaling,” said Matt Camilli, Head of Engineering at Rhythm Energy. “As a result, my team is able to resolve our Databricks job failures 20% faster because of how easy it is to set up real-time alerting and find the root cause of the failing job.”

“When data pipelines fail, data quality is impacted, which can hurt stakeholder trust and slow down decision making. Long-running jobs can lead to spikes in cost, making it critical for teams to understand how to provision the optimal resources,” said Michael Whetten, VP of Product at Datadog. “Data Jobs Monitoring helps teams do just that by giving data platform engineers full visibility into their largest, most expensive jobs to help them improve data quality, optimize their pipelines and prioritize cost savings.”

Data Jobs Monitoring helps teams to:

  • Detect job failures and latency spikes: Out-of-the-box alerts notify teams immediately when jobs fail or run beyond automatically detected baselines, so problems can be addressed before end users are affected. Recommended filters surface the most important issues impacting job and cluster health so they can be prioritized (a programmatic alerting sketch follows this list).
  • Pinpoint and resolve erroneous jobs faster: Detailed trace views show teams exactly where a job failed in its execution flow, giving them full context for faster troubleshooting. Multiple job runs can be compared against one another to expedite root cause analysis and identify trends and changes in run duration, Spark performance metrics, cluster utilization and configuration (see the metrics-query sketch after this list).
  • Identify opportunities for cost savings: Resource utilization and Spark application metrics help teams identify ways to lower compute costs for overprovisioned clusters and optimize inefficient job runs.
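
For teams that want to codify alerting beyond the out-of-the-box monitors, the sketch below shows one way to create a failed-job alert programmatically with Datadog's public Monitors API and the official Python client (datadog-api-client). This is a minimal sketch, not Datadog's documented setup: the metric name data_jobs.failed and the tag values are illustrative assumptions; substitute the metrics and tags your account actually reports.

```python
# Minimal sketch: create a metric alert on failed Spark/Databricks job runs
# via the Datadog Monitors API (v1). Data Jobs Monitoring ships out-of-the-box
# alerts, so this is purely illustrative.
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.monitors_api import MonitorsApi
from datadog_api_client.v1.model.monitor import Monitor
from datadog_api_client.v1.model.monitor_type import MonitorType

configuration = Configuration()  # reads DD_API_KEY / DD_APP_KEY from the environment

monitor = Monitor(
    name="[Data Jobs] Failed Spark job runs",
    type=MonitorType("metric alert"),
    # `data_jobs.failed` and the tags below are hypothetical names for illustration.
    query="sum(last_15m):sum:data_jobs.failed{env:prod} by {job_name} > 0",
    message="A Spark/Databricks job run failed. Check its execution trace. @slack-data-platform",
    tags=["team:data-platform"],
)

with ApiClient(configuration) as api_client:
    created = MonitorsApi(api_client).create_monitor(body=monitor)
    print(f"Created monitor {created.id}")
```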

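The run-over-run comparisons described above can also be reproduced outside the UI by pulling metric series with the Metrics API. In this hedged sketch, spark.job.count is a metric reported by Datadog's Spark integration (if enabled in your account), while the job_name and cluster_name tags are assumptions for illustration.

```python
# Hedged sketch: fetch 24 hours of a Spark metric to compare job runs
# across clusters, using the Datadog Metrics API (v1).
import time

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.metrics_api import MetricsApi

configuration = Configuration()  # reads DD_API_KEY / DD_APP_KEY from the environment

with ApiClient(configuration) as api_client:
    now = int(time.time())
    resp = MetricsApi(api_client).query_metrics(
        _from=now - 24 * 3600,  # last 24 hours of runs
        to=now,
        # `job_name` / `cluster_name` tags are hypothetical; adjust to your tagging.
        query="avg:spark.job.count{job_name:nightly-etl} by {cluster_name}",
    )
    for series in resp.to_dict().get("series", []):
        print(series["scope"], "points:", len(series["pointlist"]))
```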