At BlueLabs we started out last year with the vision of building a next-generation sports betting platform focused on performance, reliability, modularity and automation. After a period of experimentation, we are now excited to see our technology powering the launch of a new B2C operator in Ghana in early 2021.
To ensure the continuous enhancement of our platform while scaling up operations and entering additional African countries, we are now looking to grow our team. As a result, our Data Team is on the lookout for a versatile Data Engineer. With our Data Platform still in its early stages, you’ll have an opportunity to make a huge impact, work on a variety of interesting data-related challenges, and help us shape the future of our platform.
The mission of the Data Team is to provide an ecosystem where data is transmitted, stored, processed, served, and analyzed in a fast, stable, reliable, and secure way. As a team, we have the expertise to drive data-related initiatives and make an impact in various parts of the organization.
Our goal is to provide a cutting-edge Data Platform which can be leveraged to explore data, discover insights, build predictive models, and collaborate with other teams on enhancing services. We strive for innovation and promote data-driven solutions which, we believe, will make our products more compelling and give us a competitive advantage in the market. We’re currently building a strong foundation for our platform: a well-modelled Data Lake/Warehouse, real-time data ingestion pipelines designed with scalability in mind, sophisticated observability, and more. This will power our Business Intelligence, Machine Learning, and Data Science initiatives in the future.
As a Data Engineer in our team, you’ll take part in the whole development lifecycle of a product. This involves identifying problems, designing solutions, implementing them, performing code reviews, and maintaining services in the production environment. Careful modeling of the data storage layer, ensuring reliable and swift message transfer, building high-performance data pipelines, and supporting Analytics and Data Science workflows are just some of the things you’ll face in your day-to-day work. We’re looking for a data generalist who is not afraid of the diverse challenges we will face while building the platform and truly enjoys working with data.
A good candidate holds themselves to high standards, has a desire to build high-quality, well-tested, production-ready solutions, and constantly improves their skills. We expect you to take ownership of some parts of the platform, be proactive over the entire development lifecycle, and have the ability to work in a fast-paced environment. If this sounds scary, don’t worry - you won’t be alone in this. We value teamwork, trust, communication, and a healthy working relationship, so you can always count on the team for support.
- You have good problem-solving skills, a tendency towards simple and effective solutions, and a “getting things done” mentality.
- Analytical thinking, troubleshooting skills, and attention to detail.
- You are a reliable, trustworthy person who keeps their promises.
- Interest in keeping up to date and learning new technologies.
- Product-oriented mindset and eagerness to take part in shaping the products we build.
- Ability to work autonomously in a fully distributed team.
- Good communication skills in verbal and written English.
We are hiring for talent, not for a specific location. You will find that members of our team are distributed all over Europe. Being a distributed team enables us to hire only the best, without being restricted to the talent pool available at a specific geographic location. However, to facilitate team communication and collaboration we currently require you to be located in a European time zone (between UTC-1 and UTC+3). You must also be able to travel to other European locations a few times a year for on-site meetings and workshops.
The closing date for applications is November 13th, 2020, and we would like you to start your new role with us in January 2021. Please note that we will not begin reviewing submissions until the application period has closed; this ensures equal chances for all applicants regardless of when they apply.
The budgeted compensation range for this role is 58,000€ to 64,000€ annually, depending on your background and experience. As an independent contractor, you will be responsible for paying any taxes or applicable fees in your country of residence. In addition, we offer a number of perks to each of our team members, as we truly believe in a healthy work-life balance and continuous learning.
- BS degree in Computer Science or a similar technical field.
- 2+ years of software engineering experience.
- Solid understanding of modern back-end systems, microservice architecture, message-driven solutions, distributed processing, and replication.
- 1+ years of experience with relational databases (Postgres, MariaDB, Oracle).
- Background in building data processing pipelines.
- Understanding of data streaming concepts and technologies such as Pulsar, Kafka, RabbitMQ, or similar.
- Familiarity with Agile methodology, test-driven development, containerization, continuous integration/deployment, cloud environments, and monitoring.
- Ability to write clean, efficient, maintainable, and well-tested code; Golang/Python skills are a plus.
- Experience with Data Warehouses/Lakes (BigQuery, Redshift, Snowflake), NoSQL stores (Cassandra, Redis), Stream Processing Engines (Dataflow, Flink), Workflow Management Tools (Airflow, Pachyderm), or other Big Data solutions is highly appreciated.