Synechron

Sr. Data Engineer

Company

Synechron

Job type

Full-time

Location

Clarksburg, West Virginia, US

Posted

Yesterday

Salary

$60k - $80k/YEAR

Job description

About Synechron

Synechron is a global technology consulting firm that helps leading organizations accelerate digital transformation through innovation, expertise, and agility. With more than 16,500 professionals across approximately 60 offices in 20 countries, we combine deep industry knowledge with advanced capabilities in areas such as AI, cloud, cybersecurity, and data engineering.

Our regional teams, supported by strategic delivery centers, provide scalable, cost‑efficient solutions tailored to local markets. Through our award‑winning FinLabs accelerators and strategic partnerships with AWS, Microsoft, Databricks, Salesforce, and ServiceNow, we enable clients to innovate fast and lead with confidence. For more information, please visit our website or LinkedIn community.

We are seeking a highly skilled Data Engineer with strong expertise in AWS services, ETL pipelines, and modern data lakehouse architectures. The ideal candidate will have hands‑on experience with Redshift, Iceberg tables, schema/catalog management, and orchestration tools, while ensuring data security, governance, and compliance.

Key Responsibilities

  • Design, build, and maintain ETL pipelines for large‑scale data processing.
  • Work with AWS services including Kinesis, Aurora, Athena, DMS, DataSync, S3, ECS, and Lambda.
  • Implement and manage Redshift data warehouses and Iceberg tables within a data lakehouse environment.
  • Manage catalogs and schemas to ensure data consistency and discoverability.
  • Utilize job scheduling and pipeline orchestration tools to automate workflows.
  • Ensure data encryption, privacy, and governance frameworks are applied across all solutions.
  • Collaborate with cross‑functional teams to deliver scalable and secure data solutions.
  • Use GitHub for version control, including branches, pull requests, and code reviews.

Required Skills & Experience
  • Strong experience with AWS data services (Kinesis, Aurora, Athena, DMS, DataSync).
  • Hands‑on expertise in ETL development and pipeline orchestration.
  • Proficiency with Redshift and Iceberg tables in a data lakehouse architecture.
  • Solid understanding of catalog/schema management and metadata‑driven frameworks.
  • Experience with data encryption, privacy, and governance frameworks.
  • Strong knowledge of GitHub workflows (branches, PRs, code reviews).
  • Excellent problem‑solving skills and ability to work in collaborative environments.
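To illustrate the kind of work described above, here is a minimal, purely hypothetical sketch of a metadata‑driven ETL transform step: records are validated against a declared schema (a stand‑in for catalog/schema management) and a sensitive column is pseudonymized before load (a nod to the privacy requirement). All names (`SCHEMA`, `mask_email`, `transform`) are illustrative assumptions, not part of this role or any Synechron codebase.

```python
import hashlib

# A minimal "catalog" entry: column names mapped to expected Python types.
# A real pipeline would pull this from a metadata store (e.g., Glue Data Catalog).
SCHEMA = {"user_id": int, "email": str, "amount": float}

def mask_email(email: str) -> str:
    """Pseudonymize an email so downstream tables never store the raw value."""
    return hashlib.sha256(email.encode("utf-8")).hexdigest()[:12]

def transform(records):
    """Validate each record against SCHEMA and mask the email column.

    Records failing validation are dropped here; a real pipeline would
    route them to a dead-letter location such as an S3 error prefix.
    """
    clean = []
    for rec in records:
        if set(rec) != set(SCHEMA):
            continue  # column mismatch: reject
        if not all(isinstance(rec[col], typ) for col, typ in SCHEMA.items()):
            continue  # type mismatch: reject
        out = dict(rec)
        out["email"] = mask_email(out["email"])
        clean.append(out)
    return clean
```

In practice a step like this would run inside an orchestrated job (e.g., a scheduled Lambda or ECS task) before loading into Redshift or an Iceberg table.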

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, 'Same Difference', is committed to fostering an inclusive culture – promoting equality, diversity, and an environment that is respectful to all. As a global company, we strongly believe that a diverse workforce helps build stronger, more successful businesses.

We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, and sexual orientations, as well as applicants with disabilities, to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.


