Genesys is building the data platform of the future with a small team that has a startup feel and the financial stability of an industry leader. We're experiencing hypergrowth, and scalability is a key focus as our customers generate hundreds of terabytes of data. Our Analytics team works with a wide variety of technologies, with the chance to move around and work on something new practically every day.
The Genesys Cloud Analytics platform is the foundation for decisions that directly impact our customers' experience, as well as their customers' experiences. We are a data-driven company, handling tens of millions of events per day to answer questions for both our customers and the business. From new features that enable other development teams, to measuring performance across our customer base, to offering insights directly to our end users, we use our terabytes of data to move customer experience forward.
In this role, you'll partner with software engineers, product managers, and data scientists to build and support a variety of analytical big data products. The ideal candidate will have a strong engineering background, won't shy away from the unknown, and will be able to translate vague requirements into something real. Our team's focus is to operationalize big data products and curate high-value datasets for the wider organization, and to build tools and services that expand the scope and improve the reliability of the data platform as our usage continues to grow.
What you'll do:
Develop and deploy highly available, fault-tolerant software that drives improvements to the features, reliability, performance, and efficiency of the Genesys Cloud Analytics platform.
Actively review code, mentor, and provide peer feedback.
Collaborate with engineering teams to identify and resolve pain points and to champion best practices.
Partner with various teams to transform concepts into requirements and requirements into services and tools.
Engineer efficient, adaptable, and scalable architecture for all stages of the data lifecycle (ingest, streaming, structured and unstructured storage, search, aggregation) in support of a variety of data applications.
Build abstractions and reusable developer tooling that let other engineers quickly build self-service streaming and batch pipelines.
Build, deploy, maintain, and automate large global deployments in AWS.
Troubleshoot production issues and come up with solutions as required.
This may be the perfect job for you if:
You have a strong engineering background with the ability to design software systems from the ground up.
You have expertise in Java. Python and other object-oriented languages are a plus.
You have experience in web-scale data and large-scale distributed systems, ideally on cloud infrastructure.
You have a product mindset. You are energized by building things that will be heavily used.
You have engineered scalable software using big data technologies (e.g., Hadoop, Spark, Hive, Presto, Elasticsearch).
You have experience building data pipelines (real-time or batch) on large complex datasets.
You have worked with and understand messaging, queueing, and stream-processing systems.
You design not just to solve the problem at hand, but with maintainability, testability, observability, and automation as top concerns.
Technologies we use and practices we hold dear:
The right tool for the job over "we've always done it this way."
We pick the language and frameworks best suited for specific problems.
Packer and Ansible for immutable machine images.
AWS for cloud infrastructure.
Automation for everything: CI/CD, testing, scaling, healing, etc.
Hadoop, Hive, and Spark for batch.
Airflow for orchestration.
Dynamo, Elasticsearch, Presto, and S3 for query and storage.