We build systems that are real-time, accurate, and anonymous by design. Our systems help today’s largest companies understand how their buildings get used. We have counted hundreds of millions of people.
Counting people in real time is rare and particularly hard to achieve. It lets a user walk into a room, beneath our sensor, and see the room’s occupancy increment 700ms later.
Today alone, Density will ingest over one million events. In the coming year, our sensor network is on track to grow ten-fold. Overall system load is exploding, and maintaining our low-latency standards requires an increasingly thoughtful system.
We’re architecting infrastructure where annual, unscheduled downtime is measured in minutes. We’re building intelligent redundancies so missed events are an oddity. We’re constructing an exceptional engineering team to support always-on, intelligible analytics generated on the fly.
This role reports to our Director of Data Science & Engineering. It is a hands-on technical role with a management track.
As the Data Engineering Team Lead at Density, you will build out and lead a team of data engineers, providing the ETL pipelines and infrastructure behind core product data science capabilities at a fast-paced startup in growth mode.
In this role you will:
- Define and collaborate on the big data IoT tech stack, using Apache Spark, Kafka, Python, and AWS.
- Make technology choices and fill in gaps in tool chains and workflows.
- Work with software developers and data scientists to establish and implement good data engineering practices and processes.
- Work with hardware engineers to interface data between the cloud and hardware devices.
- Work with team members to keep projects on track and stakeholders informed.
The ideal candidate will have:
- 5+ years of experience as a data engineer.
- Python - Extensive knowledge of Python programming for data engineering.
- ETL - Experience implementing and monitoring production data pipelines.
- SQL - A working knowledge of SQL.
- Apache Spark (PySpark) - An intermediate ability to write and debug Spark jobs.
- Apache Airflow - Experience configuring and operating Airflow for job execution.
- Apache Kafka - Administration and monitoring of highly available Kafka installations on AWS.
- Linux command line - A working knowledge of the Linux command line and security.
- AWS - EC2, S3, and IAM configuration and automation.
- Jira - Experience organizing projects in Jira and communicating priorities to a team of data engineers.
- You have an awareness of your weak spots and a genuine desire to improve.
- You’re looking for a long-term role with a company that has long-term ambition.
- You can balance a demanding workload, discern priorities, and communicate tradeoffs effectively.
Bonus points if you have:
- Experience with Python remote kernels (e.g., with Spyder or PyCharm).
- Knowledge of real-time big data streaming tools like Kafka Streams or Flink.
- Experience with image processing or complex sensor data.
- Familiarity with C++.
What we bring:
- A team hailing from places like Apple, Meraki, HashiCorp, Stanford, NASA, and beyond.
- $100M from investors such as Kleiner Perkins, Founders Fund, and Upfront Ventures.
- A work environment full of fun, smart, dedicated, and kind teammates.
- Our principles - Be Humble, Seek Feedback, and Solve the Fundamental Problem.
- Excellent health benefits including medical, dental, vision, mental health, reproductive, and fitness coverage. Mandatory PTO and more.