Data Engineer Intern
We are looking for a Data Engineer Intern to join our Applications Engineering team. This person will design, develop, maintain, and support our Enterprise Data Warehouse & BI platform at Tesla using a variety of data and BI tools. The position offers a unique opportunity to make a significant impact across the entire organization by developing data tools and driving a data-driven culture.
Responsibilities
· Work in a time-constrained environment to analyze, design, develop, and deliver Enterprise Data Warehouse solutions for Tesla's Sales, Delivery, and Logistics teams
· Create ETL pipelines using Python and Airflow (a minimal example is sketched after this list)
· Create real-time data streaming and processing pipelines using open-source technologies such as Kafka and Spark (see the streaming sketch after this list)
· Create and maintain data pipelines feeding a data lake in AWS or Azure
· Work with systems that handle sensitive data with strict SOX controls and change management processes
· Develop collaborative relationships with key business sponsors and IT resources for the efficient resolution of work requests
· Provide timely and accurate estimates for newly proposed functionality, enhancements, and critical situations
· Communicate technical and business topics to all levels of the organization as needed, using written, verbal, and presentation materials as appropriate
· Develop, enforce, and recommend enhancements to application standards, methodologies, compliance, and quality assurance practices; participate in design and code walkthroughs
· Utilize technical and domain knowledge to develop and implement effective solutions; provide hands-on mentoring to team members through all phases of the Systems Development Life Cycle (SDLC) using Agile practices
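To give a flavor of the ETL work described above, here is a minimal sketch of an Airflow DAG, assuming a recent Airflow 2.x release. The DAG ID, task names, and extract/load logic are hypothetical placeholders, not Tesla's actual pipelines.

```python
# Minimal Airflow ETL sketch: one extract task feeding one load task.
# All names below are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: pull raw order data from a source system.
    ...


def load_to_warehouse(**context):
    # Placeholder: load transformed rows into the warehouse.
    ...


with DAG(
    dag_id="sales_orders_etl",       # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    # Run extract before load.
    extract >> load
```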
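Similarly, the real-time streaming responsibility might look like the following Spark Structured Streaming sketch, which reads from Kafka and writes Parquet files to a data lake path. The broker address, topic, and S3 paths are hypothetical, and running it requires the Spark Kafka connector package on the classpath.

```python
# Sketch: consume a Kafka topic with Spark Structured Streaming and
# land the payloads in a data lake as Parquet. Names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("delivery-events-stream")  # hypothetical app name
    .getOrCreate()
)

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder brokers
    .option("subscribe", "delivery-events")            # placeholder topic
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string.
parsed = events.selectExpr("CAST(value AS STRING) AS payload")

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "s3a://datalake/delivery-events/")  # placeholder lake path
    .option("checkpointLocation", "s3a://datalake/_chk/delivery-events/")
    .start()
)
query.awaitTermination()
```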
Requirements
· Bachelor's degree in Computer Science or an equivalent discipline
· Experience in data modeling
· Strong experience in Data Warehouse ETL design and development, including methodologies, tools, processes, and best practices
· Experience in big data processing with the Apache Hadoop/Spark ecosystem (Hive, Kafka, HDFS, etc.) preferred
· Knowledge of machine learning preferred
· Development experience with open-source technologies such as Python and Java
· Excellent query-writing and communication skills
· Familiarity with common APIs: REST and SOAP