    (Senior) Data Engineer (m/f/d)

    adsquare

    Berlin · 2 days ago
    Data & AI
    Data Engineering
    Senior
    Hybrid

    Intro

    At Adsquare, our mission is driven by our core focus:

    • Passion – Solving complex challenges with great people, tech, and data.

    • Niche – Location Intelligence for Programmatic Advertisers.

    Our core values are integral to everything we do:

    • Drive: We turn ambition into action to deliver valuable outcomes.

    • Resilience: We adapt, persevere, and grow stronger.

    • No BS: We value honesty, transparency, and clear communication.

    • Humble: We choose modesty over vanity and let results speak for themselves.

    • Moral Compass: We do the right thing with fairness, integrity, and respect.

    We seek candidates who not only bring top-tier technical expertise but also embody these values in every aspect of their work.

    About the team

    As a (Senior) Data Engineer at Adsquare, you will be a key contributor to our core engineering function, creating and maintaining scalable big data pipelines that power our applications and drive business value.

    Because our engineering department handles a variety of critical data challenges, you will be assigned to a specific cross-functional squad based on your individual strengths, experience, and current business needs. To give you an idea of the work, your daily mission might involve:

    • Data Ingestion & Products: Developing data solutions built on massive volumes of location signals, geospatial (places) data, and audience attribute data.

    • Data Integrations & Egress: Architecting privacy-first, massive-scale data egress solutions to ensure our datasets reach external partners reliably, securely, and efficiently.

    Regardless of the specific squad, you will work alongside a talented team of Data and Backend Engineers under the guidance of a Technical Team Lead, operating with a high degree of autonomy and a strong software engineering mindset.

    Your Mission
  1. Data Pipeline Ownership: Take full accountability for the pipeline lifecycle, from raw data ingestion through transformation to external delivery, within defined SLAs, timelines, and budget.

  2. Architect Scalable Solutions: Design and build robust data architectures required to process and transfer terabytes of data.

  3. Pipeline Optimization: Continuously improve data pipelines for cost and performance. This includes analyzing query plans, optimizing compute and working memory, and strategically applying horizontal or vertical scaling.

  4. Engineering Rigor: Elevate data engineering standards. Implement CI/CD workflows, infrastructure-as-code, test-driven development (TDD), and automated testing to ensure reliable and maintainable code.

  5. Data Monitoring: Create and maintain live monitoring dashboards to ensure data solutions are healthy and to support strategic decision-making.

  6. Collaboration & Mentorship: Bridge the gap between Data and Backend engineering. For Senior applicants, act as a technical leader by mentoring junior team members, conducting code reviews, and introducing best practices.

    Your Profile

    We are looking for candidates ranging from mid-level to senior (typically 3-6+ years of experience) in Data Engineering or Backend Development with a strong data focus. You must be comfortable working in a self-organized, agile environment.

    Must-Have Technical Skills:

    • Programming Mastery: Very strong proficiency in Python and SQL. You write modular, production-ready code and possess a solid understanding of both Functional Programming and Object-Oriented Programming (OOP) principles.

    • Big Data & PySpark: Deep experience with large-scale data processing frameworks, specifically Apache Spark / PySpark, and with handling TB-scale datasets efficiently. Solid understanding of big data file formats such as Parquet and Avro, and experience with open lakehouse formats such as Iceberg.

    • Advanced Optimization Skills: Proven experience in optimizing data pipelines for compute, working memory, and cost efficiency, including reading and analyzing complex query plans/profiles.

    • Database & Storage Architecture: Expertise in the trade-offs between OLAP and OLTP systems. You have built solutions using relational and non-relational (NoSQL) databases, and horizontally scalable data warehouses/lakehouses (e.g., Redshift, Snowflake, StarRocks).

    • Cloud Native (AWS): Experience architecting solutions within the AWS ecosystem (e.g., S3, Athena, Glue, EMR, Lambda, Batch).

    • Infrastructure & Orchestration: Production experience treating infrastructure as software using Terraform, alongside experience with orchestration tools like Airflow, dbt, or Step Functions.

    • Engineering Fundamentals: Solid grasp of computer science principles, data structures, algorithms, and git-flow/CI/CD pipelines.

    • AI Tools: Good command of AI tools (e.g., Claude Code, Kiro, Gemini Pro) to refactor and improve your code and to increase your productivity, code quality, and performance.

    Nice to Have
    • Compiled Languages: Experience with a compiled or strongly typed language (e.g., Java, Scala, Go, Kotlin, C++, Cython).

    • Geospatial Data: Experience working with GIS (Geographic Information Systems) and geospatial datasets.

    • Data Formats: Expertise in optimizing file formats (Parquet, Avro, Iceberg) for performance.

    • Streaming Technologies: Familiarity with Kafka and Flink.

    • Backend Context: Experience working closely with Backend engineers or familiarity with backend architectural patterns (microservices, API design).

    Why us?
    • Hybrid work model
    • 30 vacation days
    • Learning budget
    • Regular team and company events
    • Latest hardware of your choice
    • Pet-friendly Berlin office

    Recruiting Process
    Step 1: Short 30-minute take-home technical quiz.

    Step 2: Value-based interview (30 minutes).

    Step 3: Deep-dive technical interview (1.5 hours) with the Data team.

    Step 4: Practical data-crunching challenge.

    Step 5: Team Meet & Greet, the final step to ensure we’re a great fit for each other.
