Location: Bratislava (hybrid)
Contract: Freelance/FTE
Start: November 2025
Scope of Services:
- Take ownership of infrastructure installation and maintenance: Hadoop ecosystem (HDFS, Hive, Impala, Spark), Apache ecosystem (Kafka, Beam, NiFi, Airflow), Kubernetes clusters, databases, object storage.
- Manage environment provisioning (Conda, Docker, Kubernetes) and dependency management, along with user access, role-based security, networking, and firewall rules.
- Implement platform automation, monitoring, centralized logging, backup and restore.
- Build and maintain CI/CD pipelines for data platforms; perform cluster health checks and handle incident response for platform issues.
- Ensure reliable and performant environments for data scientists and ML engineers; support reproducibility, experiment tracking (MLflow), and workflow automation.
- Collaborate closely with the Telco client's team of data scientists and ML engineers.
Requirements for the supplier:
- 2–3+ years of hands-on experience managing data and compute infrastructure in a production environment.
- Strong knowledge of Linux systems administration, networking, and security best practices.
- Proficiency with big data tools, containerization/orchestration, and CI/CD pipelines.
- Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, the ELK stack).
- Ability to automate repetitive tasks and manage configurations at scale.
- Solid troubleshooting skills and incident response experience.
- Telecom or large-scale distributed system experience is a plus.
- Experience with cloud environments (GCP, AWS) is a plus.
