Junior DataOps Engineer
About Akuna:
Akuna Capital is a young and booming trading firm with a strong focus on collaboration, cutting-edge technology, data-driven solutions, and automation. We specialize in providing liquidity as an options market maker – meaning we are committed to providing competitive quotes at which we are willing to both buy and sell. To do this successfully, we design and implement our own low-latency technologies, trading strategies, and mathematical models.
Our Founding Partners, including Akuna's CEO Andrew Killion, first conceptualized Akuna in their hometown of Sydney. They opened the firm’s first office in 2011 in the heart of the derivatives industry and the options capital of the world – Chicago. Today, Akuna is proud to operate from additional offices in Sydney, Shanghai, and Boston.
What you’ll do as a Junior DataOps Engineer on the Data Engineering Team at Akuna:
Akuna Capital is seeking talented engineers to take our data platform to the next level. At Akuna, we believe that our data provides a key competitive advantage and is a critical part of the success of our business. Our Data Engineering team is composed of world-class talent, and the Data Operations team has been entrusted to run our infrastructure and build exceptional management and monitoring tooling. Our data platform extends globally and must support ingestion, processing, and access to complex datasets for a wide range of streaming and batch use cases. To support it, we build, deploy, and monitor the platform in efficient and highly automated ways, building on the best frameworks and technologies available. In this role you will:
- Work within the Data Operations team, gaining expertise through building highly automated provisioning, monitoring, and operational capabilities for data at Akuna
- Support the ongoing growth, design and expansion of our data platform across a wide variety of data sources, enabling large scale support across an array of streaming, near real-time and research workflows
- Operate the data platform to ensure key SLAs are met across a wide range of producers and consumers
- Build and run essential monitoring infrastructure supporting many of the most important data pipelines at the firm
Qualities that make great candidates:
- Bachelor's, Master's, or PhD in a technical field – Computer Science, Engineering, Physics, or equivalent – completed upon employment
- 1–3 years of professional experience developing and operating automation, monitoring, and management solutions
- Prior hands-on experience with data platforms and technologies such as Kafka, Delta Lake, Spark, and Elasticsearch
- Previous experience with observability and monitoring tools (Prometheus, Grafana, the ELK stack, etc.)
- Demonstrated experience with a leading cloud provider (AWS, Azure, Google Cloud Platform)
- Strong working knowledge of containerization and container-orchestration technologies such as Docker, Kubernetes, and Argo is a plus
- Experience writing and maintaining automation scripts
- Hands-on, collaborative working style
- Legal authorization to work in the U.S. is required on the first day of employment, including F-1 students using OPT or STEM OPT
Please note: If you have applied to multiple roles, you will be asked to complete multiple coding challenges and interviews.