101 Data Engineering jobs in Germany
Trainee Data Engineering
Posted 7 days ago
Job description
Your responsibilities as a trainee:
- Support the development and implementation of data pipelines for extracting, transforming, and loading (ETL) data from various sources (a minimal PySpark sketch follows the requirements list below).
- Help design and maintain databases and data warehouses.
- Learn and apply big data tools and technologies (e.g. Hadoop, Spark).
- Support the data engineering team with troubleshooting and performance optimization of data systems.
- Carry out data quality checks and write documentation.
- Work with data scientists and analysts to understand and meet their data requirements.
- Take part in training and further education programs to expand your technical skills.
- Gain insight into agile development methods and software engineering best practices.
Your profile:
- Completed university degree (Bachelor's or Master's) in computer science, business informatics, data science, engineering, or a related quantitative field.
- Basic knowledge of at least one programming language (e.g. Python, Java, Scala).
- Basic understanding of databases and SQL.
- Strong interest in data, data architectures, and big data technologies.
- Analytical thinking and strong problem-solving skills.
- Team spirit, willingness to learn, and a proactive way of working.
- Very good German; good English is a plus.
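To make the ETL work described in the first bullet above more concrete, here is a minimal PySpark sketch of an extract-transform-load step; the file paths, column names, and warehouse location are hypothetical placeholders, not details from this posting.

```python
# Minimal ETL sketch: paths, columns, and the target layout are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trainee_etl_sketch").getOrCreate()

# Extract: read raw order data from a (hypothetical) landing zone
raw = spark.read.option("header", True).csv("/data/landing/orders.csv")

# Transform: deduplicate, fix types, and drop obviously invalid rows
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: write the cleaned data to a (hypothetical) warehouse path, partitioned by day
orders.write.mode("overwrite").partitionBy("order_date").parquet("/data/warehouse/orders")
```

A trainee would typically start from small batch jobs like this and later add scheduling, tests, and monitoring around them.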
Trainee Data Engineering
Posted 18 days ago
Job description
During your trainee program you will gain in-depth insight into building and maintaining data infrastructures. You will learn how data is collected, stored, transformed, and prepared for analytical purposes. This includes working with different database technologies (SQL and NoSQL), implementing ETL (extract, transform, load) processes, and ensuring data quality and security. You will be involved in developing and optimizing data pipelines and learn how to use big data platforms. In addition, you will become familiar with tools and technologies in the areas of big data and cloud computing. Working in agile teams and close collaboration with data scientists and analysts are also an integral part of the program. You will have the opportunity to contribute to exciting projects that directly add value for our client.
We are looking for committed and curious individuals with a completed degree (Bachelor's or Master's) in computer science, business informatics, mathematics, engineering, or a comparable STEM subject. Initial practical experience or projects in data processing, programming, or databases are a plus but not a strict requirement. You should have a strong affinity for data and technology and enjoy learning new things. Good knowledge of at least one programming language (e.g. Python, Java, Scala) is desirable, and a basic understanding of SQL is a plus. Analytical thinking, problem-solving skills, and the ability to work both independently and in a team are essential. A good grasp of technical interrelationships and the willingness to keep learning round off your profile.
This trainee program is an ideal start to your career in IT. We offer a structured training program, mentoring by experienced colleagues, opportunities for further education, and a good work-life balance with a hybrid working model. After successfully completing the program, there is the prospect of a permanent position. If you are ready to start your career in data engineering and become part of a dynamic company, we look forward to your application.
Application Developer - Python & Data Engineering
Today
Job description
For our client in the insurance industry in Bern, we are looking for an experienced, motivated, and open-minded Application Developer - Python & Data Engineering.
Your responsibilities:
- Develop and maintain scalable, efficient backend solutions with Python and PySpark in a Databricks environment.
- Extend and maintain existing backend components (e.g. the transformation and test engine).
- Implement unit tests with high test coverage and integrate them into CI/CD pipelines (e.g. GitLab) as part of a trunk-based development process.
- Work in a varied technical environment with REST APIs, Oracle databases, file imports, and Docker containers.
- Automate data preparation processes and local workflows with Dagster (knowledge of comparable orchestration tools such as Airflow or Databricks Workflows is a plus); a minimal Dagster sketch follows this task list.
- Identify technical gaps and develop operational stories for continuous improvement.
- Translate business requirements into technical specifications, documentation, and user stories.
- Actively contribute to the agile Scrum team, including participation in code reviews, technical discussions, and DevOps activities.
- Support ongoing operations and the further development of the data platform in a cloud environment (Azure a plus).
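As a loose illustration of the Dagster-based automation mentioned in the task list above, the sketch below wires two tiny assets into a local workflow; the asset names and sample data are invented for illustration and are not taken from the client's actual platform.

```python
# Minimal Dagster sketch: asset names and sample data are invented.
import pandas as pd
from dagster import Definitions, asset, materialize

@asset
def raw_contracts() -> pd.DataFrame:
    # Stand-in for a real source, e.g. a file import or a REST API call
    return pd.DataFrame({"contract_id": [1, 2, 2], "premium": [100.0, 250.0, 250.0]})

@asset
def prepared_contracts(raw_contracts: pd.DataFrame) -> pd.DataFrame:
    # Simple preparation step: deduplicate and keep only valid premiums
    return raw_contracts.drop_duplicates("contract_id").query("premium > 0")

# Definitions object so the assets can also be loaded by the Dagster UI/daemon
defs = Definitions(assets=[raw_contracts, prepared_contracts])

if __name__ == "__main__":
    # Materialize the small asset graph locally, as in a local workflow run
    materialize([raw_contracts, prepared_contracts])
```

The same dependency graph could also be expressed in Airflow or Databricks Workflows; Dagster's asset model simply makes the data dependencies explicit.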
Your profile:
- Completed degree in computer science, data engineering, or a comparable qualification.
- Several years of experience in software or data development with Python (ideally combined with PySpark and Databricks).
- Confident handling of CI/CD pipelines, unit testing, and version control (GitLab or comparable).
- Good knowledge of SQL and data modeling.
- Experience with cloud technologies (Azure, AWS, or GCP a plus).
- Understanding of modern ETL/ELT concepts and data architectures.
- Structured, analytical, and solution-oriented way of working.
- Enjoyment of teamwork, agile methods, and continuous process improvement.
- Fluent German and English.
Head of PT Data Engineering
Today
Job description
**The Position**
The Pharma Technical Operations (PT) department is establishing the One PT Data Office to serve as the strategic center for data governance, strategy, and enablement across the entire global PT network. This team is at the heart of our digital transformation, responsible for architecting and leading a central data office to unlock the full potential of PT's data assets.
The Head of PT Data Engineering will be instrumental in building the robust data backbone that powers PT's digital transformation and data-driven decision-making. Reporting into the One PT Data Office, this critical role is accountable for leading a cutting-edge internal and external global data engineering team. You will define the strategy, evolve the data platforms and processes, and oversee the delivery of scalable, high-quality data products to enable advanced analytics, AI initiatives, and critical business processes across Pharma Technical Operations. You will lead a critical team of internal and external data engineers, fostering a culture of technical excellence, innovation, and continuous delivery. This pivotal role requires a visionary leader to build and manage the foundational data infrastructure, pipelines, and platforms that enable the seamless flow of high-quality, FAIR data from diverse sources to data consumers, ensuring compliance, scalability, and future readiness for PT's ambitious digital agenda.
**The Opportunity**
+ Provide strategic leadership and vision for PT's global data engineering capabilities, defining the roadmap for data ingestion, transformation, storage, and consumption architectures.
+ Accountable for the design, development, and evolution of scalable, robust, and cost-effective data platforms (e.g., data lakes, data warehouses, streaming platforms) that support PT's advanced analytics, AI/ML, and data product needs.
+ Define and implement best practices, standards, and guidelines for data modeling, ETL/ELT processes, data quality, and data pipeline orchestration across the PT landscape.
+ Actively monitor and integrate cutting-edge industry trends, emerging data engineering technologies, and cloud-native solutions to continually optimize PT's data infrastructure in close collaboration with IT.
+ Build, mentor, mobilize, and empower a high-performing, global team of internal and external data engineers, fostering a culture of technical excellence, innovation, and agile delivery.
+ Accountable for the end-to-end delivery and operational excellence of critical data pipelines, ensuring timely, accurate, and reliable data availability for PT's business processes and analytical use cases.
+ Ensure data infrastructure and pipelines adhere to strict quality, security, and compliance standards (e.g., GxP, data integrity, data privacy), collaborating closely with Data Governance and Cybersecurity teams.
+ Drive the automation and optimization of data engineering workflows to enhance efficiency, reduce manual effort, and improve data freshness.
**Who You Are**
+ 12+ years of progressive experience in data engineering, data platform architecture, or related roles within a complex, global enterprise, preferably in life sciences/pharma.
+ 7+ years of senior leadership experience, specifically building, developing, and leading large, global teams of data engineers.
+ Proven track record of successfully designing, implementing, and scaling robust data pipelines and cloud-based data platforms (AWS, Azure, GCP data services) for advanced analytics and AI/ML.
+ Expert-level knowledge of modern data architectures, ETL/ELT, data orchestration, and data quality management.
+ Strong understanding of GxP, data integrity, and data privacy regulations in a manufacturing context.
+ Exceptional strategic thinking, communication, and influencing skills to lead and align diverse stakeholders globally.
+ Bachelor's degree in a relevant technical field required; Master's or advanced certifications are highly advantageous.
Ready for the next step? We look forward to hearing from you. Apply now to discover this exciting opportunity!
**Who we are**
A healthier future drives us to innovate. Together, more than 100'000 employees across the globe are dedicated to advancing science, ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact.
Let's build a healthier future, together.
**Roche is an Equal Opportunity Employer.**
Consultant Data Engineering (all genders)
Today
Job description
General:
Based on our Customer Experience Platform, you design and build the technical foundation for running personalized marketing campaigns. You accompany your projects from kick-off through to a successful go-live and act as the point of contact for our clients' marketing and IT departments. Using modern cloud technologies, you develop AI-assisted, customer-specific solutions.
With your project expertise, you drive the innovative further development of our platform.
- You develop data models that take into account the logic of our CXP platform, the incoming data deliveries, and the requirements of the planned data usage.
- You design and implement ETL processes and model batch and near-real-time routes, observing constraints such as load frequency, data volume, and latency (a short sketch of the batch vs. near-real-time variants follows this list).
- You make technology decisions by selecting suitable solutions, for example between batch and real-time processing or between using Databricks and Data Factory.
- In close collaboration with neighboring product teams, you develop solutions that interact with marketing automation and customer analytics systems and ensure data availability and quality for various product requirements.
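As a rough sketch of the batch vs. near-real-time decision referenced in the list above, the example below applies one shared transformation either as a batch load or, as one possible near-real-time route, as a Spark Structured Streaming load with a micro-batch trigger; the paths, table name, and trigger interval are assumptions, not project specifics.

```python
# Sketch only: paths, table name, and trigger interval are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cxp_load_sketch").getOrCreate()

def enrich(df):
    # Shared transformation logic used by both loading variants
    return df.withColumn("loaded_at", F.current_timestamp())

# Variant A: classic batch route, e.g. a nightly load
batch = enrich(spark.read.parquet("/landing/customer_events"))
batch.write.mode("append").saveAsTable("cxp.customer_events")

# Variant B: near-real-time route via Structured Streaming with micro-batches
stream = enrich(
    spark.readStream.schema(batch.schema).parquet("/landing/customer_events")
)
(
    stream.writeStream
          .trigger(processingTime="5 minutes")
          .option("checkpointLocation", "/checkpoints/customer_events")
          .toTable("cxp.customer_events")
)
```

Which variant is appropriate depends on the constraints named in the posting, such as load frequency, data volume, and latency requirements.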
Your profile:
- Initial relevant professional experience.
- Basic knowledge of software development (ideally in C#, Python, and PySpark).
- SQL skills for data analysis and quality assurance.
- Understanding of cloud architectures and data processing workflows.
- Interest in conceptual topics around data modeling and ETL design.
- Willingness to develop further in technologies such as Databricks and Data Factory.
Data Engineering Manager (Hands-On)
Posted 5 days ago
Job description
What is Embat?
Embat is one of the fastest-growing startups in Europe, founded in 2021 to revolutionize how medium and large-sized enterprises manage their finances and cash flow in real time. Our mission is to create a SaaS product that uses advanced technology and delivers excellent customer experiences, allowing our clients to make faster, better-informed business decisions, all while improving efficiency by over 100x compared to Excel.
We are a diverse team of 100+ members, backed by leading international investors, aiming to go global and make Embat a fantastic place to work. In February 2024, we completed our third funding round of €15 million, led by the renowned international fund Creandum, alongside partners Samaipata, 4Founders Capital, and VentureFriends.
What are we looking for?
Embat is in a significant growth phase and needs an experienced and talented Data Engineering Manager for the Data Team.
This is not just a people management role: we are looking for a hands-on technical leader who thrives on solving complex challenges, driving strategy, and building a strong, scalable data culture across the organization.
As the Data Engineering Manager, you will combine people leadership, delivery ownership, and technical guidance. Your responsibilities will extend beyond managing tasks to cultivating a high-performing, healthy team that consistently delivers high-quality software while aligning with Embat's strategic goals.
What will you do?
- Team Leadership & People Development: Inspire, mentor, and coach individual data engineers and analysts. You will foster a supportive and high-performing environment through regular 1:1s, career development discussions, and constructive feedback.
- Hands-on Technical Leadership & Strategy: Actively contribute as an individual contributor while guiding the team's data strategy. You will design, build, and oversee reliable, performant, and secure data solutions while championing a data-driven culture.
- Delivery & Execution Excellence: Own the planning, execution, and successful delivery of initiatives. Anticipate challenges, remove blockers, and ensure efficient, timely, and high-quality releases.
- Cross-Functional Collaboration: Act as a critical bridge between the data team and other functions (Product, Design, Engineering). Translate complex technical concepts into clear communication for all stakeholders.
What are we looking for?
- 6+ years of professional experience in Data Engineering, with at least 2+ years in a leadership or management role.
- Strong hands-on technical skills in data engineering, analytics, and architecture (SQL, Python, ETL pipelines, data warehousing, cloud platforms).
- Experience designing and scaling data platforms with strong knowledge of data modeling, data quality, and governance.
- Track record of building, mentoring, and scaling high-performing teams.
- A builder’s mindset: you take ownership, create impact, and feel proud and accountable for the team’s success.
- Strong communicator with the ability to bridge technical and business discussions.
- Ambitious, curious, and passionate about creating impact through data.
- Languages: Working proficiency in English & fluent in Spanish.
- Location: preferably in Madrid or elsewhere in Spain.
Additionally, any of the below would definitely be great:
- Experience with modern data stacks: dbt, Airflow, Snowflake/BigQuery/Redshift, Kafka, GCP.
- Knowledge of CI/CD processes, observability, and best practices for data engineering.
- Previous experience in fintech or financial services.
- International experience and additional languages.
What comes with working at Embat?
 
We offer a platform that allows you to reach your professional and personal goals. We pride ourselves on working in an evolving and agile environment, with day-to-day interaction with every member of the team. There is no bureaucracy or hierarchy; instead, we give you space to lead, build, collaborate, and create value for our users. Additionally, being an Embat(ier) also comes with the following benefits:
- A competitive salary according to the project and responsibility.
- Hybrid working setup & flexible schedule
- Latest technology of your choice to do your impactful work with.
- Access to private health insurance with Sanitas
- Access to salary on demand, restaurant card, transport card, and kindergarten checks through Payflow
- English Classes
- Career progression - we are a small team with great ambitions.
- An opportunity to work hand in hand with our founders who built their careers in Investment Banking at J.P. Morgan for more than a decade.
- 360º development - through internal and external talks, sponsored conferences and many more to come.
- Twice-yearly performance reviews
- Team-building plans
Lead Data Engineering - Data Production (m/f/d)
Posted 15 days ago
Job description
At Statista, we're all about facts and data, for we are the world's leading business data platform. By providing reliable and easy-to-use data as well as various data analytics products and services, we empower people worldwide to make fact-based decisions.
Founded in Hamburg in 2007, we have quickly grown into a global company with offices in major cities such as London, New York, Berlin, and Tokyo. And we still have a lot of plans. Our constant growth not only proves our success but also keeps creating new development and career opportunities for our employees.
We value and celebrate our diverse culture. You are welcome here for who you are, no matter where you come from, what you look like, or whether you prefer bar graphs to pie charts. Your story matters – keep writing it as part of our team.
Are you ready to join us?
bit.ly/3NGPzQQ
Lead Data Engineering - Data Production (m/f/d)
Today
Job description
At Statista, we're all about facts and data, for we are the world's leading business data platform. By providing reliable and easy-to-use data as well as various data analytics products and services, we empower people worldwide to make fact-based decisions.
Founded in Hamburg in 2007, we have quickly grown into a global company with offices in major cities such as London, New York, Berlin, and Tokyo. And we still have a lot of plans. Our constant growth not only proves our success but also keeps creating new development and career opportunities for our employees.
We value and celebrate our diverse culture. You are welcome here for who you are, no matter where you come from, what you look like, or whether you prefer bar graphs to pie charts. Your story matters – keep writing it as part of our team.
Are you ready to join us?
- Contribute to Statista's Data Production mission by building an automated, at-scale production machine for high-value data that empowers corporates and AI-builders worldwide. 
- Lead your team both disciplinarily and professionally, setting objectives, managing delivery, and actively contributing to high-priority projects. 
- Oversee the extraction of data from multiple sources at scale, leveraging automation and AI tools where applicable, and ensure data is structured, processed, and ready for further use or publication. 
- Build and maintain scalable data pipelines and orchestration workflows to support both data production and content publishing processes. 
- Monitor and optimize the performance, reliability, and security of data infrastructure, ensuring high availability across all systems. 
- Collaborate with internal stakeholders to understand requirements, provide automated sourcing solutions, and facilitate large-scale data ingestion and publishing. 
- Participate in strategic decisions on data production, value contribution, and AI & automation initiatives.
- Continuously analyze established processes and identify opportunities for automation and optimization across production and publishing workflows. 
- 5+ years of experience in Data Engineering, Data Operations, or a related field, with a strong track record of leading technical teams and delivering complex data solutions. 
- Demonstrated experience in building and scaling data pipelines, ETL processes, and automated workflows in production environments. 
- Strong Python skills for developing scalable automation, backend services, and data processing pipelines. 
- Proficiency in workflow orchestration, containerization, and version control tools (e.g., Apache Airflow, Docker, Git); a minimal Airflow sketch follows this list.
- Expert command of SQL for querying, manipulation, and optimization of large datasets. 
- Deep understanding of data modeling and architecture, with a proven ability to implement best practices for robust, scalable, and maintainable systems. 
- Hands-on experience with cloud infrastructure (AWS preferred: ECS, EC2, RDS, S3), including deployment, monitoring, and management; Infrastructure-as-Code is a plus. 
- Track record of strategic thinking and execution, able to align data production and publishing initiatives with business goals. 
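To illustrate the kind of workflow orchestration referenced in the requirement above, here is a minimal Apache Airflow sketch of a three-step pipeline; the DAG id, task names, and step bodies are placeholders rather than Statista's actual production workflow.

```python
# Minimal Airflow sketch: all names and steps are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")

def transform():
    print("clean and structure the extracted data")

def publish():
    print("push the processed data to the publishing layer")

with DAG(
    dag_id="data_production_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_publish = PythonOperator(task_id="publish", python_callable=publish)

    # Simple linear dependency: extract -> transform -> publish
    t_extract >> t_transform >> t_publish
```

In a real setup, the Python callables would typically be replaced by operators or containerized tasks that call the actual extraction and publishing services.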
In addition to our great team, culture, and our shared goal of empowering people with data, there are many other things that make Statista a great place to work! Join us and benefit from:
- Work from abroad 10 days a year (up to 30 if your family lives abroad) 
- Hybrid work and flex-time 
- International team and social events 
- Career & training opportunities 
- Attractive locations and modern offices 
- Mental health support by OpenUp 
Some of the benefits listed here apply only to the German entity and to Junior-level roles or above.
Trainee in Data Engineering (Remote)
Posted 9 days ago
Job description
Senior Data Engineering Consultant (German-speaking)
Today
Job description
At Machine Learning Architects Basel (MLAB), we assist and empower people and organizations in designing, building, and operating reliable data and machine learning solutions. In doing so, our data and AI journeys and effective solution patterns enable our customers to operationalize, scale, and continuously deliver data and AI products beyond the pilot and prototype stages. These patterns and frameworks revolve not only around the latest technologies but also consider role, skill, and process adjustments. We thereby:
- Help our customers realize the full potential of data and AI solutions, from use case identification through data and ML platform implementation to the integration, testing, and operation of ML models, LLMs, and other GenAI solutions.
- Design, test, integrate and operate data, model and code pipelines, and end-to-end data/ML/LLM systems (DataOps, MLOps & DevOps).
- Enable technical and non-technical teams and individuals to leverage data science and management, data, ML, and reliability engineering in an end-to-end fashion.
Do you want to contribute to our dynamic and growing services company with your Machine Learning, AI, and Software Engineering knowledge? Do you want to act as a thought leader and trusted advisor in the field of Data Products and Data Mesh?
We are looking for a German-speaking Senior Data Engineering Consultant who will be involved in the whole lifecycle of projects, both internally and externally:
- Consulting, Engineering & Training: You perceive data, software, and AI engineering as key capabilities for mastering the challenges of our clients' digital transformations, want to help them understand both their potential and their limitations, and deliver impactful, valuable services.
- Requirement Analysis: You analyze customer requirements and identify and define best-fit solutions.
- Implementation of Data Pipelines and Platforms, ML/LLM Integrations, Reliability Engineering & Operationalization: You understand how to successfully deliver data projects from the prototype or pilot phase into production, design, build, integrate and test data pipelines and platforms, and implement engineering best practices such as traceability, reliability, scalability, measurability, and automation within a demanding project and technology environment (a small data-quality testing sketch follows this list).
- Concept Development: You contribute to our solution blueprints and concepts (e.g., our journey for ‘Reliable Data Products & Efficient Data Meshes’).
- Expertise & Thought Leadership: You strive to become an expert and a trusted advisor in the field of Data Platforms, Data Products, and DataOps.
- Ownership, Communication, Knowledge Sharing & Teamwork: You take ownership of your work, present your results to various stakeholders, share your knowledge, and collaborate (pro-)actively with our and your client's teams.
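As a small illustration of the testing and reliability practices mentioned in the implementation bullet above, the sketch below shows a pytest-style data-quality check; the pipeline function and the expectations are invented placeholders, not MLAB's actual methodology.

```python
# Minimal data-quality test sketch (pytest + pandas); names and checks are hypothetical.
import pandas as pd

def build_customer_table() -> pd.DataFrame:
    # Stand-in for a real pipeline step that would read and transform source data
    return pd.DataFrame({"customer_id": [1, 2, 3], "country": ["CH", "DE", "AT"]})

def test_customer_table_is_reliable():
    df = build_customer_table()
    # Reliability checks of the kind a production pipeline would automate:
    assert not df.empty                                   # data actually arrived
    assert df["customer_id"].is_unique                    # no duplicate keys
    assert df["country"].isin(["CH", "DE", "AT"]).all()   # values within the expected domain
```

Run with pytest as part of a CI pipeline, checks like these give a basic, automated signal that a data product still meets its expectations after each change.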
Your profile:
- Professional experience (minimum 5 years) as a Data or Software Engineer with a focus on data and ML systems.
- Experience with, and ideally certification in, major data and AI platforms (e.g. Snowflake, Databricks, AWS, Azure, MS Fabric).
- Familiarity with data analytics and DataOps best practices, as well as topics such as Data Mesh, Data Lake/Warehouses, and Reliability Engineering.
- Understanding of and strong interest in the end-to-end life cycle of projects, code, model, and data pipelines, and in working with various stakeholders.
- Technical, hands-on experience with at least some of the following:
  - Programming languages.
  - Distributed systems (Hadoop, Spark) and data structures.
  - SQL and NoSQL databases.
  - Cloud services.
  - REST APIs and microservices.
  - Docker and knowledge of Kubernetes.
  - Agile development methods and CI/CD.
- Experience working in a client-facing or consulting role.
- Fluency in German and English (written and spoken).
- Swiss passport or a valid EU/EFTA work permit.
What we offer:
- A young and dynamic services company with an experienced, knowledgeable, and passionate team.
- An entrepreneurial environment and the chance to have a real impact on the company's development and growth.
- Work on cutting-edge data, AI, and analytics topics that have a real impact across industries.
- A culture that is both performance-oriented and customer-driven, and at the same time team-oriented, friendly, and supportive, including regular knowledge-sharing sessions and team events.
- A hybrid working model with flexibility, as long as both client commitments (most clients require onsite presence) and internal commitments (i.e., one team office day per week) are met.