Vibotek LLC

Senior/Lead Data Engineer

Alpharetta, Georgia, US · Full-time · Posted 1 week ago via LinkedIn

Salary: Not listed

Job type: Full-time

Location: Alpharetta, Georgia, US

Remote: No

Posted: 1 week ago

Job description

Location: Alpharetta

Required Citizenship / Work Permit / Visa Status

US citizens or green card holders

Must-Haves

  • 8+ years of hands-on data engineering experience in enterprise environments.
  • Strong expertise in Azure services, especially Azure Databricks, Azure Functions, and Azure Data Factory (preferred).
  • Advanced proficiency in Apache Spark with Python (PySpark).
  • Strong command of SQL, query optimization, and performance tuning.
  • Deep understanding of ETL/ELT methodologies, data pipelines, and scheduling/orchestration.
  • Hands-on experience with Delta Lake (ACID transactions, optimization, schema evolution).
  • Strong experience in data modelling (normalized, dimensional, and lakehouse modelling).
  • Experience with both batch processing and real-time/streaming data (Kafka, Event Hubs, or similar).
  • Solid understanding of data architecture principles, distributed systems, and cloud-native design patterns.
  • Ability to design end-to-end solutions, evaluate trade-offs, and recommend best-fit architectures.
  • Strong analytical, problem-solving, and communication skills.
  • Ability to collaborate with cross-functional teams and lead technical discussions.

Skills: Java, Python, Spark

Notice period: 0 to 15 days only

Job stability is mandatory

Additional Guidelines

Interview process: 2 technical rounds + 1 client round. Hybrid model, with 3 days per week in office.

Preferred Skills

  • Experience with CI/CD tools such as Azure DevOps and Git.
  • Familiarity with IaC tools (Terraform, ARM).
  • Exposure to data governance and cataloging tools (Azure Purview).
  • Experience supporting machine learning or BI workloads on Databricks.
