AI & Data Consultant – Accenture
Welcome to Accenture’s AI & Data practice, the people who love using data to tell a story. We’re also the world’s largest team of data scientists, data engineers, and experts in machine learning and AI. A great day for us? Solving big problems using the latest tech, serious brainpower, and deep knowledge of just about every industry. We believe a mix of data, analytics, automation, and responsible AI can do almost anything: spark digital transformations, widen the range of what humans can do, and breathe life into smart products and services. Want to join our crew of sharp analytical minds?
About the Role
We are looking for an experienced and motivated Data Consultant to help design, build, and optimize scalable data systems. This role focuses on developing robust data pipelines, enabling real‑time data ingestion and processing, and supporting data‑driven decision‑making. You’ll work with cutting‑edge technologies including Apache Spark, Databricks, Kafka, and cloud platforms like GCP and Azure.
As a subject‑matter expert, you will lead by example: guiding technical decisions, mentoring team members, and collaborating with cross‑functional teams to deliver high‑impact solutions. Your contributions will help shape data architecture and ensure the availability, reliability, and performance of data infrastructure.
Key Responsibilities
- Design and develop end‑to‑end data pipelines, including real‑time streaming and batch processing.
- Build scalable and efficient solutions using Apache Spark, Databricks, and Kafka.
- Implement ETL / ELT processes to collect, transform, and load data across diverse systems.
- Ensure data quality, consistency, and integrity through validation frameworks and monitoring tools.
- Optimize pipeline performance and scalability across cloud platforms (GCP and Azure).
- Collaborate with engineering, analytics, and business teams to support data needs.
- Lead technical discussions, contribute to architectural decisions, and mentor junior engineers.
- Stay current with emerging tools, frameworks, and best practices in the data engineering space.
Qualifications
- Bachelor’s degree or college diploma in a related field.
- 5+ years’ experience with Apache Spark and Databricks, including building real‑time data pipelines with Kafka.
- Strong command of ETL / ELT workflows, data access patterns, and pipeline orchestration, with a track record of deploying and managing scalable solutions in cloud environments like GCP and Azure.
- 3+ years’ proficiency in Python, Spark, and SQL, along with a solid understanding of data quality management, performance tuning, and collaborative problem‑solving.
- 3+ years’ experience in leading or significantly contributing to complex technical initiatives.
- English is required for this position as this role will regularly interact with English‑speaking stakeholders across Canada.
Preferred Qualifications
- Familiarity with orchestration tools.
- Understanding of data warehousing concepts and architecture.
- Strong communication skills and the ability to work cross‑functionally.
Additional Information
- Seniority level: Mid‑Senior level
- Employment type: Full‑time
- Job function: Information Technology
- Industries: Business Consulting and Services