
Dynatrace

Senior Machine Learning Engineer

Reposted 2 Days Ago
Remote or Hybrid
Hiring Remotely in Barcelona, Cataluña
Senior level
The role involves designing and implementing ML services and pipelines, collaborating with teams, and ensuring reliability and efficiency in ML operations.

Dynatrace provides software intelligence to simplify cloud complexity and accelerate digital transformation. With automatic and intelligent observability at scale, our all-in-one platform delivers precise answers about the performance and security of applications, the underlying infrastructure, and the experience of all users to enable organizations to innovate faster, collaborate more efficiently, and deliver more value with dramatically less effort. That’s why many of the world’s largest organizations trust Dynatrace to modernize and automate cloud operations, release better software faster, and deliver unrivalled digital experiences.

Dynatrace makes it simple to monitor and run the most complex, hyper-scale multicloud systems. Dynatrace is a full-stack, completely automated monitoring solution that tracks every user and every transaction across every application.

The Opportunity:

We’re looking for a Senior Machine Learning Engineer (MLOps) to build and scale production ML services for our Business Insights products. You will drive the delivery of major projects across both LLM and traditional ML domains, including data pipeline design, model training, deployment, and monitoring, and collaborate with Data Science and Software Engineering to uphold standards for reliability, latency, and cost.

Your Tasks:

Engineering and Architecture

  • Design and implement robust data and ML pipelines for training, deployment, and inference at scale, ensuring reliability, performance, and cost efficiency across cloud environments.

  • Deliver production ML services using cloud‑native patterns (e.g., managed services, serverless, container orchestration) optimized for low latency and high throughput.

  • Establish MLOps practices: dataset and model versioning, experiment tracking, promotion gates from development to production, and safe rollback or canary strategies.

  • Build ETL/ELT workflows with clear schema management, data validation, reproducibility, and performance tuning for large‑scale datasets.

  • Implement strategies for scalable inference, including caching, batching, autoscaling, and hardware‑aware optimizations to meet service‑level objectives.

  • Set technical direction for ML service architecture and pipeline design, ensuring scalability and portability across platforms.
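
To make the batching strategy mentioned above concrete, here is a minimal micro-batching sketch in plain Python. The function name and batch sizes are illustrative only, not part of any Dynatrace service; real serving stacks typically add time-based flushing and backpressure on top of this idea.

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def make_batches(items: Iterable[T], batch_size: int) -> Iterator[List[T]]:
    """Group incoming inference requests into fixed-size batches.

    Batching amortizes per-call overhead (model invocation, network
    round-trips) across many requests, trading a little latency for
    much higher throughput.
    """
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    batch: List[T] = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch
```

For example, `list(make_batches(range(10), 4))` yields two full batches and one partial batch of two items.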

Operations, Reliability, and Governance

  • Instrument services with metrics, logs, and traces; maintain dashboards and alerts for latency, throughput, errors, drift, and cost.

  • Run offline and online evaluations for accuracy, drift, stability, and cost; maintain golden datasets and automated promotion gates.

  • Own lifecycle management: training/retraining schedules, deployment procedures, incident playbooks, and post‑incident reviews.

  • Implement robust access controls, secrets management, data governance, and auditability across platforms.
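
One common way to quantify the drift mentioned above is the Population Stability Index (PSI), which compares bucket proportions of a live sample against a training-time baseline. The sketch below is a generic, stdlib-only implementation, not Dynatrace's internal method; the bin count and the ~0.2 alert threshold are conventional rules of thumb, not product settings.

```python
import math
from bisect import bisect_right
from typing import Sequence

def psi(expected: Sequence[float], actual: Sequence[float],
        bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Buckets are cut on quantile edges of the baseline, then
    PSI = sum((a - e) * ln(a / e)) over bucket proportions.
    Values above roughly 0.2 are commonly treated as drift.
    """
    edges_src = sorted(expected)
    # quantile cut points derived from the baseline sample
    edges = [edges_src[int(len(edges_src) * i / bins)] for i in range(1, bins)]

    def proportions(sample: Sequence[float]) -> list:
        counts = [0] * bins
        for x in sample:
            counts[bisect_right(edges, x)] += 1
        n = len(sample)
        # small floor avoids log(0) on empty buckets
        return [max(c / n, 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job could compute this per feature on each scoring window and page when the index crosses the chosen threshold.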

Minimum Requirements:

  • Professional Python: 5+ years writing production‑quality code with testing/packaging and ML/DS libraries (MLflow, FastAPI, scikit‑learn, PyTorch or TensorFlow).

  • MLOps: 3+ years with model registries, experiment tracking, promotion gates, and safe deployment strategies.

  • Data engineering: 3+ years building reliable ETL/ELT, schema evolution, data validation, and performance tuning on large‑scale datasets.

  • CI/CD and IaC: 3+ years designing and owning build/test/deploy pipelines, plus infrastructure automation.

  • Containers and orchestration: 3+ years operating ML services on Kubernetes or equivalent.

  • Communication: clear design docs, ability to explain trade‑offs to technical and non‑technical stakeholders.

  • Education: Master’s degree or equivalent practical experience in CS/Engineering/Math or related field.

Preferred Requirements:

  • Experience with SQL‑centric data platforms (e.g., Snowflake) or cloud ML workloads (AWS/GCP/Azure).

  • Observability and monitoring integration (Dynatrace or similar).

  • Workflow orchestration (Prefect, Airflow) and CI tools (Jenkins, GitHub Actions).

  • Streaming and near real‑time patterns (Kafka, Kinesis).

  • Security and privacy: PII handling, audit trails, policy enforcement.

  • Domain: telemetry and observability, time‑series modelling, anomaly detection.
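
As a baseline for the anomaly-detection domain listed above, a rolling z-score detector flags points that deviate sharply from a trailing window. This is a generic textbook technique sketched with only the standard library; the window size and threshold are illustrative assumptions, and production telemetry systems use far more sophisticated models.

```python
import statistics
from typing import List, Sequence

def rolling_zscore_anomalies(series: Sequence[float], window: int = 20,
                             threshold: float = 3.0) -> List[int]:
    """Return indices whose value lies more than `threshold` standard
    deviations from the trailing window's mean.

    A simple baseline detector for telemetry-style time series; it
    assumes the recent window is representative of normal behavior.
    """
    anomalies: List[int] = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.fmean(hist)
        sigma = statistics.stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies
```

On a mildly noisy flat series with a single large spike, only the spike index is flagged; the following window absorbs the spike into its statistics, which is why real detectors often use robust estimators such as the median and MAD instead.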

Top Skills

Airflow
AWS
Azure
FastAPI
GCP
GitHub Actions
Jenkins
Kafka
Kinesis
Kubernetes
MLflow
Prefect
Python
PyTorch
scikit-learn
SQL
TensorFlow

