Entry-level Azure Data Engineer, Dulles, VA
Azure Data Engineer:
This role involves the design, development, and implementation of data solutions to business problems. An Azure Data Engineer will be expected to perform duties such as evaluating the performance of current data solutions and designing and implementing cloud and hybrid data solutions. The ability to adapt to and learn new technologies per business requirements is also needed.
- Design and implement data solutions using industry best practices.
- Perform ETL and ELT operations and administer data and systems securely, in accordance with enterprise data governance standards.
- Monitor and maintain data pipelines proactively to ensure high service availability.
- Work with Data Scientists and ML Engineers to understand mathematical models and optimize data solutions accordingly.
- Pursue continuous professional development through training and mentorship programs.
- Create scripts and programs to automate data operations.
You meet our “must haves” for this role if you have:
- Minimum Bachelor’s degree in Data Science, Business Intelligence, Computer Science or related fields, or the equivalent combination of education, professional training, and work experience.
- 0-3 years of experience working with one or more languages commonly used for data operations, including SQL, Python, Scala, and R.
- Experience working with relational databases such as SQL Server, Oracle, and MySQL.
- Experience working with NoSQL databases such as Redis, MongoDB, and Cosmos DB.
- Excellent problem-solving skills and the ability to learn independently from scattered resources.
- Thorough understanding of the responsibilities and duties of a data engineer, as well as established industry standards/best practices and documentation guidelines.
- Outstanding communication skills, and the ability to stay self-motivated and work with little or no supervision.
- Authorization(s) to work lawfully in the United States.
It's a plus if you meet any of the following:
- Experience with cloud-based data technologies.
- Experience with distributed systems utilizing tools such as Apache Hadoop, Spark, or Kafka.
- Working experience in Agile Scrum environments.
- Experience with source control tools such as Git, SVN, and TFS.