🌍 Location: We are full-remote and globally distributed! Our current team is distributed between GMT-8 and GMT+2, so we currently only hire in these timezones.
🖥️ Team: Ingestion
💼 Team Lead: Paweł Ledwoń
💰 Compensation: Please check our compensation calculator.
🦔 Read more about how we hire and how we think about Diversity & Inclusion.
We're shipping every product that companies need to run their business, from their first day to the day they IPO, and beyond. The operating system for folks who build software.
We started with open-source product analytics, launched out of Y Combinator's W20 cohort. We've since shipped more than a dozen products, including:
- A built-in data warehouse, so users can query product and customer data together using custom SQL insights.
- A customer data platform, so they can send their data wherever they need with ease.
- Max AI, an AI-powered analyst that answers product questions, helps users find useful session recordings, and writes custom SQL queries.
Next on the roadmap are CRM, messaging, revenue analytics, and support products. When we say every product that companies need to run their business, we really mean it!
We are:
- Product-led. More than 100,000 companies have installed PostHog, mostly driven by word-of-mouth. We have intensely strong product-market fit.
- Default alive. Revenue is growing 10% MoM on average, and we're very efficient. We raise money to push ambition and grow faster, not to keep the lights on.
- Well-funded. We've raised more than $100m from some of the world's top investors. We're set up for a long, ambitious journey.
We're focused on building an awesome product for end users, hiring exceptional teammates, shipping fast, and being as weird as possible.
Things we care about:
Transparency: Everyone can read about our roadmap, how we pay (or even let go of) people, our strategy, and how we work, in our public company handbook. Internally, we share revenue, notes and slides from board meetings, and fundraising plans, so everyone has the context they need to make good decisions.
Autonomy: We don’t tell anyone what to do. Everyone chooses what to work on next based on what's going to have the biggest impact on our customers, and what they find interesting and motivating to work on. Engineers lead product teams and make product decisions. Teams are flexible and easy to change when needed.
Shipping fast: Why not now? We want to build a lot of products; we can't do that shipping at a normal pace. We've built the company around small teams – autonomous, highly-efficient groups of cracked engineers who can outship much larger companies because they own their products end-to-end.
Time for building: Nothing gets shipped in a meeting. We're a natively remote company. We default to async communication – PRs > Issues > Slack. Tuesdays and Thursdays are meeting-free days, and we prioritize heads down building time over perfect coordination. This will be the most productive job you've ever had.
Ambition: We want to solve big problems. We strongly believe that aiming for the best possible upside, and sometimes missing, is better than never trying. We're optimistic about what's possible and our ability to get there.
Being weird: Weird means redesigning an already world-class website for the 5th time. It means shipping literally every product that relates to customer data. It means building an objectively unnecessary developer toy with dubious shareholder value. Doing weird stuff is a competitive advantage. And it's fun.
We're seeking an ingestion pipeline engineer who:
- Thrives on the challenge of building systems that process billions of events per day
- Gets excited about designing elegant, efficient systems that can handle terabytes of data without giving people insomnia
- Understands the importance of data integrity and reliability for customers
The ideal candidate has experience with high-throughput data processing systems such as:
- Analytics platforms
- Metric collection systems
- Log aggregation engines
- Streaming and batch-processing pipelines
We use a mixture of Node.js and Rust for high-throughput processing. We store most of our data in Kafka, PostgreSQL, ClickHouse, S3, and Redis, but with the growing volume of data, we're constantly re-evaluating our technological choices. We're looking for someone who understands the principles of designing distributed systems and can use them to pick the best tools for the job.
At PostHog you won't get stuck maintaining an obscure microservice or working in the shadows of the product org. Instead, you will:
- Own the entire service end-to-end: no committees or overzealous PMs; the destiny of the ingestion pipeline will be in your hands.
- Build open-source software: you'll be able to show your Rust-fu to your friends and family (and security researchers too).
- Build in the hot path: your code will decide whether our customers and engineers have a good time or not.
- Start from first principles: no cookie-cutter solutions here; you'll be safe from AI agents for a good while.
- See immediate results: small, confident, frequent steps forward – that's how we like to move.
Our team is spread across North America and Europe, and we're looking for another engineer in Europe or on the US East Coast.
We're growing very quickly at PostHog – so quickly that the numbers in our job descriptions often get out of date. Our ingestion pipeline currently processes tens of billions of events a month, and we're hoping to add one more zero to that soon. You'll be responsible for developing the infrastructure to capture all that data, process it reliably, and provide it to other parts of PostHog's platform, such as product analytics, feature flags, CDP, and more.
What we're looking for:
- Experience working with highly scalable, event-driven distributed systems
- Experience developing multi-tenant software-as-a-service products
- Experience with Node.js, Go, Rust, or similar languages
- Experience with PostgreSQL, Kafka, Redis, or similar systems at scale
- The ability to ship changes quickly without breaking things
- Experience with customer data platforms or similar data analytics systems
- Experience carrying a pager and dealing with incidents
- Comfort provisioning and maintaining cloud infrastructure
- Experience with benchmarking and profiling tools
- Knowledge of observability systems and practices
We believe people from diverse backgrounds, with different identities and experiences, make our product and our company better. That’s why we dedicated a page in our handbook to diversity and inclusion. No matter your background, we'd love to hear from you! Alignment with our values is just as important as experience! 🙏
Also, if you have a disability, please let us know if there's any way we can make the interview process better for you - we're happy to accommodate!
#LI-DNI