Data Engineer
2025-04-15T14:50:57+00:00
Raising The Village
https://www.greatugandajobs.com/jsjobsdata/data/employer/comp_2286/logo/Raising%20The%20Village.png
https://www.raisingthevillage.org/
FULL_TIME
Mbarara
Mbarara
00256
Uganda
Nonprofit and NGO
Science & Engineering
2025-04-28T17:00:00+00:00
Uganda
8
Vacancy title: Data Engineer
[Type: FULL_TIME, Industry: Nonprofit and NGO, Category: Science & Engineering]
Jobs at: Raising The Village
Deadline of this Job: Monday, April 28, 2025
Duty Station: Mbarara | Mbarara | Uganda
Summary
Date Posted: Tuesday, April 15, 2025 | Base Salary: Not Disclosed
JOB DETAILS:
Location: Mbarara
Supervisor: Senior Data Scientist/Designate
Years of experience: 3+ years
Department: Venn
Field Travel: 10%
About Raising The Village: At Raising The Village (RTV), we are dedicated to eradicating ultra-poverty in Sub-Saharan Africa. As a dynamic and rapidly growing international development organization, we have assembled a team of over 250 passionate individuals in Uganda, alongside 12+ professionals in North America and 14+ in Rwanda, with further expansion planned across the Sub-Saharan region. Together, we're committed to elevating communities out of ultra-poverty by implementing innovative solutions and leveraging advanced data analytics to drive impact. To date, our holistic approach has positively impacted more than one million lives, and we're poised for even greater milestones, aiming to reach over 1,000,000 participants annually by 2027. Our journey of growth and success is made possible by the invaluable support of our global partners, who share our vision and belief in sustainable change. Learn more about our impactful programs at www.raisingthevillage.org
The Venn department is the data and technology backbone of our organization, connecting advanced analytics and custom software tools with field implementation to ensure data-informed decision-making at every level.
Job Summary:
The Data Engineer will play a crucial role in the Venn department by designing, building, and maintaining scalable data pipelines, ensuring efficient data ingestion, storage, transformation, and retrieval. The role involves working with large-scale structured and unstructured data, optimizing workflows, and supporting analytics and decision-making.
The ideal candidate will have deep expertise in data pipeline orchestration, data modeling, data warehousing, and batch/stream processing. They will work closely with cross-functional teams to ensure data quality, governance, and security while enabling advanced analytics and AI-driven insights to support Raising The Village's mission to eradicate ultra-poverty.
Responsibilities:
Data Pipeline Development & Orchestration
- Design, develop, and maintain scalable ETL/ELT pipelines for efficient data movement and transformation.
- Develop and maintain workflow orchestration for automated data ingestion and transformation (a minimal orchestration sketch follows this list).
- Implement real-time and batch data processing solutions using appropriate frameworks and technologies.
- Monitor, troubleshoot, and optimize pipelines for performance and reliability.
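For illustration only, here is a minimal sketch of what the orchestration work above can look like in practice, written against Apache Airflow's TaskFlow API (Airflow is one of the workflow tools named under Qualifications below). The DAG name, schedule, and extract/transform/load logic are hypothetical placeholders, not an actual RTV pipeline.

from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def survey_etl():
    # Hypothetical pipeline: names and logic are illustrative only.

    @task
    def extract() -> list[dict]:
        # In practice this step would pull from an API, database, or file drop.
        return [{"household_id": 1, "income_usd": "2.10"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Clean and type-cast records before loading.
        return [{**r, "income_usd": float(r["income_usd"])} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # In practice this step would write to a warehouse such as BigQuery.
        print(f"Loaded {len(rows)} rows")

    load(transform(extract()))


survey_etl()

Expressing a pipeline declaratively like this is what lets an orchestrator retry, backfill, and monitor each step independently.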
Data Architecture & Storage
- Build and optimize data architectures, warehouses, and lakes to support analytics and reporting.
- Work with both cloud and on-prem environments to leverage appropriate storage and compute resources.
- Implement and maintain scalable and flexible data models that support business needs.
Data Quality, Security & Governance
- Ensure data integrity, quality, security, and compliance with internal standards and industry best practices (a small quality-check sketch follows this list).
- Support data governance activities, including metadata management and documentation to enhance usability and discoverability.
- Collaborate on data access policies and enforcement across the organization.
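For illustration, a small sketch of the kind of automated quality gate these bullets describe, in plain Python; the required fields and plausibility range are hypothetical, not RTV's actual validation rules.

def validate_row(row: dict) -> list[str]:
    """Return human-readable quality violations for one record."""
    errors = []
    # Hypothetical required fields for a household survey record.
    for field in ("household_id", "district", "visit_date"):
        if not row.get(field):
            errors.append(f"missing required field: {field}")
    # Hypothetical plausibility range for a numeric measure.
    income = row.get("income_usd")
    if income is not None and not (0 <= income < 10_000):
        errors.append(f"income_usd out of plausible range: {income}")
    return errors


rows = [{"household_id": 7, "district": "Mbarara", "visit_date": "2025-04-01",
         "income_usd": 1.85}]
failures = [(i, errs) for i, r in enumerate(rows) if (errs := validate_row(r))]
print(f"{len(failures)} of {len(rows)} rows failed quality checks")

Checks like these typically run inside the pipeline itself, so bad records are quarantined before they reach analysts or dashboards.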
Cross-functional Collaboration & Solutioning
- Work closely with cross-functional teams (analytics, product, programs) to understand data needs and translate them into technical solutions.
- Support analytics and AI teams by providing clean, accessible, and well-structured data.
Innovation & Continuous Improvement
- Research emerging tools, frameworks, and data technologies that align with RTV's innovation goals.
- Contribute to DevOps workflows including CI/CD pipeline management for data infrastructure.
Qualifications and Requirements:
- Education: Bachelor's degree in Computer Science, Data Engineering, or a related field; a Master's degree is a plus.
- Experience: 4+ years of hands-on work in data engineering and building data pipelines.
- Programming: Strong in SQL and Python; you can clean, process, and move data like a pro.
- Data Tools: Experience using workflow tools like Airflow, Prefect, or Kestra.
- Data Transformation: Comfortable working with tools like dbt, Dataform, or similar.
- Data Systems: Hands-on with data lakes and data warehouses; you've worked with tools like BigQuery, Snowflake, Redshift, or S3.
- APIs: Able to build and work with APIs (e.g., REST, GraphQL) to share and access data (a short end-to-end sketch follows these requirements).
- Processing: Know your way around batch processing tools like Apache Spark and real-time tools like Kafka or Flink.
- Data Design: Good understanding of data modeling, organization, and indexing to keep things fast and efficient.
- Databases: Familiar with both relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB) databases.
- Cloud: Experience with major cloud platforms like AWS, Google Cloud, or Azure.
- DevOps: Know your way around Docker, Terraform, Git, and CI/CD tools for smooth deployments and testing.
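To make the API and database requirements above concrete, here is a short, self-contained sketch that pulls records from a hypothetical REST endpoint and upserts them into a relational table. sqlite3 stands in for PostgreSQL so the example runs without a server; the endpoint URL, table, and columns are invented for illustration.

import sqlite3

import requests


def fetch_households(base_url: str) -> list[dict]:
    # Hypothetical endpoint; a real integration would add auth and pagination.
    resp = requests.get(f"{base_url}/households", timeout=30)
    resp.raise_for_status()
    return resp.json()


def upsert(rows: list[dict], db_path: str = "households.db") -> None:
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS households ("
        "id INTEGER PRIMARY KEY, district TEXT, income_usd REAL)"
    )
    # Idempotent upsert keyed on the natural ID, so re-running a failed
    # load is safe; the same pattern works in PostgreSQL.
    con.executemany(
        "INSERT INTO households (id, district, income_usd) "
        "VALUES (:id, :district, :income_usd) "
        "ON CONFLICT(id) DO UPDATE SET district = excluded.district, "
        "income_usd = excluded.income_usd",
        rows,
    )
    con.commit()
    con.close()


if __name__ == "__main__":
    upsert(fetch_households("https://api.example.org"))  # placeholder URL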
Skills & Abilities:
- Strong ability to design, implement, and optimize scalable data pipelines.
- Experience with data governance, security, and privacy best practices.
- Ability to work collaboratively and engage with diverse stakeholders.
- Strong problem-solving and troubleshooting skills.
- Ability to effectively manage conflicting priorities in a fast-paced environment.
- Strong documentation skills for technical reports and process documentation.
Work Hours: 8
Experience in Months: 48
Level of Education: Bachelor's degree
Job application procedure