Remote Developer Jobs


Data Platform Engineer at 7Bridges

The Why

We are looking for people who are curious to see first-hand what the future of logistics looks like, and we want you to take a front-row seat in shaping it with us.
No day at 7Bridges is ordinary: you will be surrounded by a friendly, inquisitive and creative bunch of colleagues who will always value the unique point of view you bring, while helping you take pride in the concrete day-to-day impact of your work.


The Role

Data Engineering lies at the core of the game-changing platform that 7Bridges is rolling out across the logistics industry. The Data Engineer role sits within our engineering team and operates in close collaboration with our data science team. The main focus will be to incrementally design and implement a logistics data platform that all existing and future 7Bridges products will tap into.


Job responsibilities

- Devise innovative approaches to, and continuous incremental improvements of, our data pipelines to distil the noisy data we currently ingest into a coherent and representative view of logistics that will lie at the core of the 7Bridges platform.
- Help find, validate and ingest new data sources, enabling 7Bridges to provide more accurate decision-making models and ultimately make logistics easy and efficient for all our customers.
- Collaborate closely with our Data Scientists and Core Engineers on the design of data pipelines and optimisation models, as well as the ongoing definition and refinement of our data domain.


Skills

- You should be comfortable working in a team or on your own, with the curiosity and eagerness to learn new things and to communicate and share them with others.
- A general drive to raise the bar and lead your peers towards what you believe is a better way of doing things.
- Fluency with Python and SQL; general knowledge of functional programming concepts and languages (Scala, Haskell, etc.) is beneficial.
- In-depth experience designing and implementing relational and non-relational data models as well as ETL data pipelines, preferably in a cloud environment such as AWS or GCP.
- Experience with distributed data processing concepts and frameworks such as Apache Spark or Apache Flink, as well as data-processing workflow managers such as Apache Airflow or dbt.
- Experience in building data models and workflows supporting Machine Learning models at scale is advantageous.
- Some knowledge of basic Machine Learning models and techniques is beneficial.