
DevOps Engineer, Data & Analytics
Upwork
Remote
About
We’re looking for a DevOps engineer who can architect and maintain the systems that power large-scale data ingestion, graph-backed analytics, and real-time strategy forecasting. You’ll be responsible for ensuring our pipelines, APIs, and cloud services are secure, scalable, and efficient, supporting everything from multi-source data polling to attribution modeling and ROI reporting. This role requires someone comfortable operating at the intersection of distributed systems, automation, and applied machine intelligence.

Key Responsibilities
- Build and maintain cloud-native infrastructure (AWS) to support data ingestion pipelines, analytics, and forecasting engines.
- Own CI/CD pipelines for backend services (FastAPI, Python) and the frontend (Vercel/Next.js), ensuring reliable deployments and rapid iteration.
- Optimize data workflows (Airflow pipelines, Neo4j graph operations, caching layers) for speed, reliability, and cost efficiency.
- Implement robust security controls, including key management, tenant isolation, short-TTL authentication, and SOC 2-ready audit trails.
- Develop and refine monitoring and observability frameworks (logging, metrics, dashboards, incident runbooks).
- Partner with data science and product teams to support metric validation, forecasting models, and closed-loop attribution logic.
- Manage cost controls and scaling strategies for high-frequency API usage across multiple providers.

Qualifications
- Proven experience in DevOps or platform engineering, preferably in data-intensive or AI/analytics-driven environments.
- Strong knowledge of AWS services (ECS/EKS, Lambda, Secrets Manager, S3, CloudWatch, IAM).
- Hands-on experience with workflow orchestration (Airflow or similar) and graph databases (Neo4j or equivalent).
- Proficiency with infrastructure as code (Terraform, CloudFormation) and containerization (Docker/Kubernetes).
- Solid grounding in security best practices (scoped access, secret rotation, audit logging).
- Familiarity with observability stacks (Prometheus, Grafana, Datadog, ELK).
- Comfortable collaborating with engineers, data scientists, and product stakeholders to align infrastructure with evolving business needs.