🇬🇧 DataOps Engineer for Leading Energy Solutions Company (f/m/d)
The Company
We share a common set of values that guide our actions and behaviours. It is important for us to treat each other with respect, to honour commitments made and to act with integrity and responsibility.
Mission
We are a leading independent and integrated energy company. We offer our customers innovative energy solutions in the transport, heating and industrial segments.
Company Values
Social Responsibility
Respect
Honour
Integrity
Top 3 Company Advantages
Work-Life-Balance
Team culture
Learning environment
Employees say...
3.5/5 overall rating
“Because it’s a great culture; people are smart and affable. Good benefits → long-term thinking at Mabanaft: transition from a logistics and trading company to a research and development company for green energy.”
Short Role Description
Core Tasks
As a DataOps Engineer (f/m/d), you and your engineering colleagues are responsible for implementing the data fabric architecture and all associated components. Together with us, you enable and operate a central marketplace for data & analytics products for the company's internal and external customers. What exactly awaits you?
Create end-to-end data products (master data and transactional data products) that enable the company to build its own analytics products (ML models, business domain data marts, etc.) and publish them on our central data marketplace.
Apply modern DataOps principles to your work, e.g. CI/CD pipelines, containers, component-based ELT pipelines, infrastructure-as-code with Bicep/Terraform, and automated component building, testing and deployment
Develop components, such as data acquisition components, that reduce the complexity our users face when building their own analytical products
Help our team enrich the existing Azure tool stack with relevant new tools, so that we can provide an environment for implementing not only BI solutions but also data science/ML solutions.
Key Requirements
Must haves:
Comfort with a greenfield approach (building new systems from scratch)
Good knowledge of SQL and Python
Experience building data pipelines; strong coding skills
Further requirements:
Strong practical implementation experience with SQL, Python, data modelling, creation and orchestration of data pipelines, Git, infrastructure-as-code, stored procedures, and CI/CD pipelines
Ideally, experience with the Azure cloud tech stack, e.g. Databricks, Synapse, Azure Data Factory, Logic Apps, Azure SQL DB
Ideally, practical experience with data fabric technologies such as dbt, Dataiku, Google Looker, or Quest erwin
Knowledge of the creation and continuous development of ML models is an advantage
Ideally, you are characterized by team spirit, a hands-on mentality, and a way of working that is both independent and collaborative
Compensation
€80,000–85,000 p.a.
Benefits
Special benefits, an employer-subsidised company pension scheme, group accident insurance that also covers private activities, a working-life account with the option of a sabbatical, and other social benefits
A modern office building in a central location with an in-house gym and massage room
Various offers in the areas of childcare, holiday camps for children, care for the elderly, care in the event of life crises and more through our cooperation with pme Familienservice
Flexible working time models, flexible mobile working arrangements, subsidised meals in our staff restaurant and bistro, as well as free drinks
Application Details
Starting date: Immediately
Working capacity: Full-time
Application documents: CV
Hiring Process
1st interview: with the recruitment team
2nd interview: technical interview with a case study
Get to know the team/offer stage
Working Mode
Dresscode: Casual / Business Casual
Working mode: 80% remote; candidates based further away spend one week in the office followed by three weeks remote.