Solera

Principal Software Engineer

Company: Solera
Role: Principal Software Engineer
Location: India
Job type: Full time


Salary: Not disclosed by employer

Job description


Principal Software Engineer — AI Platform & Modernization

Location: Solera India, Bangalore (On-site | No travel)

About Solera

Solera is a global leader in data and software solutions that manage and protect life’s most important assets—cars, homes, and identities. We deliver innovative platforms that drive operational efficiency and exceptional customer experiences.

Role Summary

We are seeking a hands-on Principal Software Engineer to lead architecture and delivery of large-scale systems with a special focus on embedding AI into both product features and the software development lifecycle (SDLC). You will be Solera’s AI engineering expert, driving AI-assisted modernization (e.g., Perl/.NET → Java microservices), Retrieval-Augmented Generation (RAG) solutions, model-driven capabilities, and productivity accelerators using LLMs. This is a senior, on-site role in Bangalore that combines technical leadership with active coding.

What You Will Do

Architecture & Delivery

Define and evolve architecture for scalable, resilient, secure microservices and data platforms.

Ship critical components yourself (reference implementations, performance-sensitive services).

Own API contracts, service decomposition, data models, and integration patterns (REST; SOAP where legacy requires).

AI Integration & Enablement

Embed AI in product and SDLC:

AI-assisted code conversion/refactoring (e.g., Perl/.NET → Java).

Automated test generation (unit, contract, regression) and AI-driven code reviews.

Performance analysis and tuning with AI-based profiling/insights.

Build guardrails: prompt patterns, safety filters/PII masking, evaluation harnesses, and offline prompt regression suites.

Partner with data science to productize models (retrieval, orchestration, monitoring).
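To make the guardrails above concrete, here is a minimal PII-masking sketch. It is illustrative only: the patterns and names are hypothetical, and a production filter would use far more robust detection than two regexes.

```python
import re

# Hypothetical, minimal PII-masking guardrail: redact obvious email
# addresses and phone-like numbers before a prompt reaches an LLM.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace each PII match with a typed placeholder like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Both the address and the number below are replaced with placeholders.
print(mask_pii("Contact jane.doe@example.com or +91 98765 43210."))
```

In practice this kind of filter sits in front of every outbound model call, alongside prompt-injection checks and the offline prompt regression suite.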

RAG & Model-Driven Capabilities

Architect and implement RAG systems for enterprise use cases (intelligent search, knowledge assistants, decision support, developer tooling).

Own vector pipelines—chunking, embedding, indexing, hybrid retrieval (vector + keyword/metadata), and re-ranking.

Build, fine-tune, or adapt ML/NLP models where appropriate (classification, extraction, embeddings, LLM adaptation)—and translate them into reliable, scalable services.
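The hybrid-retrieval idea above (vector plus keyword/metadata scores, then ranking) can be sketched in a few lines. This is a toy illustration with hand-made two-dimensional "embeddings" and invented document IDs; a real pipeline would use an embedding model, a vector store, and a dedicated re-ranker.

```python
import math

# Hypothetical sketch of hybrid retrieval: blend a vector-similarity
# score with a keyword-overlap score, then rank documents by the mix.
DOCS = {
    "claims-faq":   ([0.9, 0.1], "how to file an auto claim"),
    "repair-guide": ([0.2, 0.8], "estimating repair costs"),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def keyword_score(query, text):
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q)

def hybrid_search(query, query_vec, alpha=0.7):
    """Score = alpha * vector similarity + (1 - alpha) * keyword overlap."""
    scored = []
    for doc_id, (vec, text) in DOCS.items():
        score = (alpha * cosine(query_vec, vec)
                 + (1 - alpha) * keyword_score(query, text))
        scored.append((score, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

print(hybrid_search("file a claim", [0.85, 0.15]))
```

The blending weight (alpha here) is exactly the kind of knob the evaluation harnesses mentioned earlier would tune against a labeled relevance set.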

Production-Grade AI Systems

Design for latency, throughput, availability, and cost controls (token budgets, caching, batching).

Implement model and prompt versioning, canary/A-B rollouts, and rollback strategies.

Define and monitor accuracy/relevance, drift (data/prompt), and observability (cost, latency, usage).

Modernization & Maintenance

Lead a systematic modernization of legacy modules (Perl, older .NET/Java) into cloud-native services with measurable risk and debt reduction.

Oversee and stabilize Windows services where applicable; chart migration paths to containers/orchestration.

Leadership & Collaboration

Act as subject matter expert for AI and platform engineering; mentor senior engineers.

Collaborate with Product, Security, Data Science, and DevOps to turn requirements into executable designs and pragmatic milestones.

Raise the bar on code quality, security (OWASP, OAuth2/OIDC), and performance via reviews and technical standards.

Minimum Qualifications (Must-Haves)

15+ years of software engineering experience building and operating enterprise systems; 5+ years owning architecture for distributed/microservices platforms.

Production AI engineering: integrated LLMs/ML into shipped features AND the SDLC (e.g., AI-assisted refactoring, test generation, defect triage, performance tuning).

RAG expertise: embeddings, vector search, hybrid retrieval, pipeline design, and evaluation.

Languages/Frameworks:

Java (Spring Boot) and C#/.NET (ASP.NET/Core)—enterprise-grade systems.

Perl—hands-on experience maintaining/refactoring production systems.

Python—scripting/automation and model tooling.

APIs & Frontend collaboration: Expert in REST; working knowledge of SOAP; familiarity with React/Angular to align backend/frontend integration.

Data: SQL Server/MySQL/PostgreSQL; MongoDB; Elasticsearch/OpenSearch; ORM (Hibernate/JPA/Entity Framework).

Cloud & Infra (AWS): Designing/deploying cloud-native systems using services such as Lambda, ECS/EKS, API Gateway, S3, SQS/SNS/EventBridge, RDS, DynamoDB/OpenSearch.

Containers & CI/CD: Docker, Kubernetes; CI/CD with GitHub Actions/Jenkins; IaC with Terraform/CloudFormation.

Security & Observability: OAuth2/OIDC, secrets management, logging/metrics/tracing, cost and performance monitoring.

Communication & Leadership: Mentors senior engineers; explains complex AI trade-offs to technical and non-technical stakeholders.

Education: Bachelor’s in Computer Science/Engineering or related (Master’s preferred).

Preferred Qualifications (Plus Factors)

Master’s/Ph.D. in CS/AI/ML; publications or patents.

Experience with LLM fine-tuning or instruction tuning, and model observability/drift detection.

Hands-on with LangChain or Semantic Kernel, vector stores (OpenSearch/Elasticsearch, FAISS, Pinecone).

Experience leading large-scale modernization programs across polyglot stacks.

Open-source contributions, conference talks, or recognized thought leadership.

Background in privacy-first engineering and regulated data environments.

AI Capabilities You Will Own (Deep Dive)

LLM Integration & Orchestration: prompt engineering, tool/function calling, context management, grounding with RAG.

RAG Architecture: ingestion/chunking strategies, embedding models, vector indexing, hybrid retrieval, re-ranking, freshness/re-indexing policies.

Model Lifecycle: selection, adaptation, packaging, inference optimization (batching, caching, ONNX/TensorRT where applicable), and SLO-driven operations.

Responsible AI & Security: PII redaction, prompt-injection defense, data minimization, auditability, rate-limiting/quotas, and governance.

AI for Developer Productivity: code conversion/refactoring, test generation with coverage thresholds, defect clustering, changelog summarization, architectural blueprints.

Provider-Agnostic Delivery: experience with Azure OpenAI/OpenAI/Bedrock or equivalent; ability to evaluate cost/latency/quality trade-offs.
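As a concrete example of the ingestion/chunking step listed under RAG Architecture, a simple overlapping word-window splitter might look like the following. This is a sketch; production chunkers are usually token-aware and respect document structure (headings, sentences, code blocks).

```python
# Hypothetical chunking sketch: split text into overlapping fixed-size
# word windows before embedding, so context spanning a boundary is not
# lost between adjacent chunks.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    words = text.split()
    step = chunk_size - overlap  # advance by less than a full window
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last window already covers the tail
    return chunks
```

Chunk size and overlap interact directly with the embedding model's context limit and with retrieval quality, which is why the freshness/re-indexing policies above treat them as versioned configuration rather than constants.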

Tech Stack

Back-end: Java (Spring Boot), C#/.NET, Perl (modernization), Python
Front-end: React or Angular
Data: SQL Server, MySQL, PostgreSQL; MongoDB; Elasticsearch/OpenSearch
Cloud/Infra: AWS (Lambda, ECS/EKS, API Gateway, S3, SQS/SNS/EventBridge, RDS, DynamoDB/OpenSearch), Docker, Kubernetes, Terraform/CloudFormation, GitHub Actions/Jenkins
AI/ML: LLM providers (Azure OpenAI/OpenAI/Bedrock), LangChain/Semantic Kernel, vector stores (OpenSearch/Elasticsearch/FAISS/Pinecone), ONNX/TensorRT (where applicable)
