Our client, a fast-growing fintech analytics company, is scaling its team and seeking a data engineer responsible for building, maintaining, and understanding its data and the delivery infrastructure built on it.

Responsibilities:
- Contribute to the design and development of our Python data workflow management platform
- Design and develop tools to wrangle datasets of small and large volumes into cleaned, normalized, and enriched datasets
- Build and enhance a large, scalable big data platform (Spark, Hadoop)
- Refine processes for normalization and performance tuning of analytics

Skills:
- You love building elegant solutions that scale
- You bring deep experience in the architecture and development of quality backend production systems, specifically in Python
- You love working on high-performing teams, collaborating with team members, and improving our ability to deliver delightful experiences to our clients
- You are excited by the opportunity to solve challenging technical problems, and you find learning about data fascinating
- You understand server, network, and hosting environments; RESTful and other common APIs; common data distribution; and hosted storage solutions

Must Have:
- 5+ years of full-time experience in a professional environment
- Expertise in Python
- Experience with ETL and/or other big data processes
- Experience with at least 2 popular big data / distributed computing frameworks, e.g. Spark, Hive, Kafka, MapReduce, Flink
- Experience working independently or with minimal guidance
- Strong problem-solving and troubleshooting skills
- Ability to exercise judgment to make sound decisions
- Proficiency in multiple programming languages
- Strong communication skills, interpersonal skills, and a sense of humor