Data Engineer – AWS, Python, PySpark, REST APIs, Tableau, Splunk
An opportunity to work for a market-leading SaaS provider of technology to the Financial Services industry, based in London.
Within their industry they use innovative architecture to enable their clients to reach a wider audience more effectively and efficiently, through extensive use of API extensibility.
As a Data Engineer you will have a proven track record of building data pipelines that load data into data lakes and data warehouses running in the cloud (AWS), and a strong desire to create best-in-class enterprise data solutions on what is fundamentally a greenfield site, as they look to create a brand-new data warehouse in a new territory. You will be heavily involved in migrating all on-premises systems across to AWS, be experienced in real-time streaming, have excellent Python knowledge, and ideally be familiar with both Tableau and Splunk.
Your ideas are welcome, and you will be expected to bring in new tooling and technology to support these processes as they continue to grow. As the Data Engineer you will be a significant member of the technology team, both in your ability to contribute and produce ideas and in your technical nous. The data team is currently small, and there is a long, continuous roadmap of work to complete as the company constantly looks at new territories.
• Design, build, monitor and manage large-scale batch and streaming data pipelines in an AWS cloud environment
• Data analysis
• Data modelling
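To illustrate the kind of work involved, the sketch below shows a per-record normalisation step of the sort a batch pipeline might apply before loading events into a data lake. In practice this logic would typically live inside a PySpark DataFrame job on AWS; here it is written in plain Python, and every field name (`event_id`, `amount`, `ts`, `source`) is hypothetical, chosen purely for illustration.

```python
import json
from datetime import datetime, timezone

def transform_record(raw: str) -> dict:
    """Parse one raw JSON event and normalise it for loading.

    All field names are hypothetical examples, not a real schema.
    """
    event = json.loads(raw)
    return {
        # Stable identifier, passed through unchanged
        "event_id": event["id"],
        # Amounts arrive as strings; coerce to float and round to pence
        "amount_gbp": round(float(event["amount"]), 2),
        # Epoch seconds -> timezone-aware ISO 8601 timestamp (UTC)
        "occurred_at": datetime.fromtimestamp(
            event["ts"], tz=timezone.utc
        ).isoformat(),
        # Default a missing source rather than failing the record
        "source": event.get("source", "unknown"),
    }

if __name__ == "__main__":
    raw = '{"id": "abc-123", "amount": "250.5", "ts": 1700000000}'
    print(transform_record(raw))
```

In a Spark job the same function could be applied across a distributed dataset (for example via `rdd.map(transform_record)`), keeping the transformation logic unit-testable independently of the cluster.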