By joining Serempre under the SA modality, you will enjoy several advantages that will help you grow professionally:
You will work with our international clients, interacting with multicultural teams and innovative projects.
You will have the opportunity to communicate in English, thus enhancing your skills in a second language.
You will benefit from remote work.
You will gain exposure to a variety of cutting-edge technologies.
Collaborate with cross-functional data science, analytics, and product teams to understand data requirements and deliver tailored data solutions, translating technical complexities for non-technical stakeholders.
Learn something new every day while working with colleagues and partners from different cultural backgrounds; you will gain exposure to a variety of projects and processes, including work methodologies, task management, emerging tools, and best practices.
Share knowledge with other professionals, providing leadership or mentorship to junior team members.
Design and develop data pipelines: build, optimize, and maintain reliable, scalable, and efficient pipelines for both batch and real-time data processing.
Develop and maintain a data strategy aligned with business objectives, ensuring data infrastructure supports current and future needs.
Evaluate and implement the latest data engineering tools and technologies that will best serve our needs, balancing innovation with practicality.
Regularly review, refine, and optimize SQL queries across different systems to maintain peak performance.
Identify and address bottlenecks, query performance issues, and resource utilization.
Manage and maintain production AWS RDS MySQL, Aurora, and PostgreSQL databases and their replicas, ensuring reliability and availability.
Perform routine database operations, including backups, restores, and disaster recovery planning.
Implement monitoring solutions to ensure high availability and troubleshoot data pipeline issues in real-time.
Maintain comprehensive documentation of systems, pipelines, and processes for easy onboarding and collaboration.
Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field with 5+ years of experience in data engineering.
Deep understanding of data engineering concepts, including ETL/ELT processes, data warehousing, big data technologies, and cloud platforms (e.g., AWS, Azure, GCP).
Proficiency in programming languages such as Python, Scala, or Java, and experience with SQL and NoSQL databases.
Knowledge of best practices in cloud database administration, including parameter tuning, backups, capacity management, and performance optimization.
Strong experience in designing and implementing data architectures, including real-time data processing, data lakes, and data warehouses.
Hands-on experience with data engineering tools such as Apache Spark, Kafka, Snowflake, Airflow, Databricks, and modern data orchestration frameworks.
We are looking for a data-driven professional with strong attention to detail and problem-solving skills, focused on data quality, governance, and scalable architecture to ensure reliable, high-performance data pipelines.
This position is based in Mexico City, Bogotá, or São Paulo, in a hybrid scheme (3 days on site / 2 remote) with flexible schedules.