Find Your Dream AWS Job in India

Explore the latest AWS job openings in India. Discover top companies hiring for AWS roles across major cities in India and take the next step in your career.


AWS Engineer

Role Description

This is a full-time on-site role for an AWS Engineer located in Trivandrum. The AWS Engineer will be responsible for software development, infrastructure management, cloud computing, Linux administration, and database maintenance.

Must Have

- Minimum 5 years of hands-on experience in the AWS cloud, at least with S3, EC2, MSK, Glue, DMS, and SageMaker.

- Bachelor's degree in Computer Science or a related field; development/work experience in Python, Docker, and containerization.

- Should be able to troubleshoot problems, review designs, and code solutions.

- AWS-certified candidates are preferred.

Qualifications

  • Software Development skills
  • Infrastructure and Cloud Computing expertise
  • Linux and Database administration experience
  • Strong problem-solving and analytical skills
  • AWS certification is a plus
Fincita Consulting Inc

Today

Java Developer with AWS/Azure (6+ years only)

  • 6+ years of professional experience
  • Experience developing microservices and cloud-native apps using Java/J2EE, REST APIs, Spring Core, Spring MVC, Spring Boot, JPA (Java Persistence API) or any other ORM, Spring Security, and similar tech stacks (open source and proprietary)
  • Experience with unit testing using frameworks such as JUnit, Mockito, and JBehave
  • Build and deploy services using Gradle, Maven, Jenkins, etc. as part of the CI/CD process
  • Experience working in Cloud Platform (AWS/Azure/GCP)
  • Experience with any Relational Database (Oracle, PostgreSQL etc.)

Cognizant

Today

Java + AWS Developer

Location: Any metropolitan city

Experience: 6+ years

Key Focus: Java 11, Java 21, Microservices, Event-Driven Architecture, AWS, Kubernetes

About Us

MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

We are looking for a Senior Java Developer (AWS-DevOps cloud experience) to join our team, enabling our customer success.

Job Summary:

We are looking for a highly skilled Senior Backend Engineer to design, build, and deploy scalable, cloud-native microservices using Java 11, Java 21, Spring Boot, and AWS. The ideal candidate will have strong expertise in event-driven architecture, infrastructure-as-code (Terraform), and CI/CD automation while ensuring high code quality through rigorous testing and best practices.

Key Responsibilities:

Design & Development:

  • Architect, develop, and maintain highly scalable microservices using Java 11, Java 21, and Spring Boot.
  • Implement event-driven systems using AWS SNS, SQS, and Lambda.
  • Ensure clean, modular, and testable code with proper design patterns and architectural principles.

Cloud & DevOps:

  • Deploy applications using Docker, Kubernetes, and Helm on AWS.
  • Manage Infrastructure as Code (IaC) with Terraform.
  • Monitor systems using Grafana, Prometheus, Kibana, and Sensu.

Must-Have:

  • 6+ years of hands-on JVM backend development (Java 11 and Java 21).
  • Expertise in Spring Boot, Spring Cloud, and Hibernate.
  • Strong experience with AWS (SNS, SQS, Lambda, S3, CloudFront) + Terraform (IaC).
  • Microservices & Event-Driven Architecture design and implementation.
  • Test automation (JUnit 5, Mockito, WireMock) and CI/CD pipelines (Jenkins, Kubernetes).
  • Database proficiency: PostgreSQL, DynamoDB, MongoDB, Redis.
  • Containerization & Orchestration: Docker, Kubernetes, Helm.
  • Monitoring & Logging: Grafana, Prometheus, Kibana.
  • Fluent English & strong communication skills.

MyRemoteTeam Inc

Today

AWS ARCHITECT

Experience: 8 to 12 years

  • Certifications:
  • AWS Certified (preferably by Amazon)
  • Technical Skills:
  • Deep experience with DevOps tools and AWS networking
  • Technical design experience in DevOps practices
  • Hands-on experience with Jira, reviewing technical designs/changes
  • Ability to collaborate across platforms in a hybrid cloud environment
  • Open to profiles such as AWS Architect, DevOps Engineer, or DevOps Expert

Live Connections

Today

Python AWS Developer

Location: Any metropolitan city

Experience: 6+ Years

Key Focus: Python, PostgreSQL, FastAPI, DevOps, CI/CD, AWS, Kubernetes, and Terraform.

About Us

MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

We are looking for a Senior Python AWS Developer to join our team, enabling our customer success.

Key Responsibilities

  • Participate in solution investigation, estimation, planning, and alignment with other teams; design, implement, and deliver new features for the Personalization Engine
  • Partner with the product and design teams to understand user needs and translate them into high-quality content solutions and features.
  • Promote and implement test automation (e.g., unit tests, integration tests)
  • Build and maintain CI/CD pipelines for continuous integration, development, testing, and deployment.
  • Deploy applications on the cloud using technologies such as Docker, Kubernetes, AWS, and Terraform.
  • Work closely with the team in an agile and collaborative environment. This will involve code reviews, pair programming, knowledge sharing, and incident coordination.
  • Maintain existing applications and reduce technical debt.

Qualifications (Must Have):

  • 6+ years of experience in software development is preferred
  • Experience with Python
  • Experience with PostgreSQL
  • Good understanding of data structures and clean code
  • Able to understand and apply design patterns
  • You are interested in DevOps philosophy
  • Experience with FastAPI
  • Willing to learn on the job
  • Experience with relational and non-relational databases
  • Empathetic and able to easily build relationships
  • Good verbal and written communication skills

MyRemoteTeam Inc

Today

AWS, Redshift

TCS HIRING

ROLE: AWS, Redshift, Spotfire

YEARS OF EXP: 8+ YEARS

LOCATION: PAN INDIA

AWS, Redshift, Spotfire

This role focuses on leveraging AWS Redshift to develop, manage, and optimize data warehousing solutions. You will work closely with data scientists, analysts, and IT teams to ensure high performance and availability of data for business insights, and will also report the analytics via Spotfire reports.

Responsibilities

  • Design, deploy, and manage AWS Redshift clusters.
  • Optimize query performance and resource management within Redshift.
  • Implement and manage data security and compliance protocols.
  • Monitor and troubleshoot data warehousing processes.
  • Collaborate with data engineering and analytics teams to define requirements.
  • Develop and maintain ETL pipelines for data integration.
  • Conduct regular backup and recovery processes.

Requirements

  • Proven experience in managing AWS Redshift environments.
  • Strong background in data warehousing and database management.
  • Reporting experience using TIBCO Spotfire
  • Experience with ETL processes and data integration.
  • Understanding of cloud architecture and services.
  • Excellent problem-solving and analytical skills.

Tata Consultancy Services

Today

AWS + NodeJS Developer

Must-Have: Node.js, Microservices/REST/SOAP services, AWS Lambda

Good-to-Have: AWS Serverless, CloudFormation, Terraform, SQL, NoSQL

Exp Range: 5 to 10 years

Location: Chennai, Pune & Indore

Interview Type: Weekday Virtual Drive

Date: 25-Jun-2025

Day: Wednesday

Tata Consultancy Services

Today

Amazon Web Service (AWS), DevOps - PAN INDIA

Greetings from TCS !

TCS Hiring for Amazon Web Service(AWS), DevOps, Kubernetes, Terraform

Job Location: Chennai

Experience Range: 8-12 Years

Job Description :

  • Maintains in-depth knowledge of the AWS DevOps cloud platforms, provides detailed advice regarding their application, and executes specialized tasks
  • Core experience in AWS; CI experience (Git, Jenkins, GitLab); Bash, PowerShell, and build automation; container experience in Docker; AWS DevOps; CKA and CKAD certifications
  • Extensive working knowledge of CI image building with both Linux and Windows containers
  • Should know best-practice standards for the CI image-building process for both Linux and Windows containers
  • Significant experience with SaaS and web-based technologies
  • Skilled with Continuous Integration and Continuous Deployment using AWS DevOps services.
  • Skill in automation with Python or Bash is an added advantage.
  • Skilled with containerization platforms using Docker & Kubernetes.
  • Familiar with architecture/design patterns and re-usability concepts.
  • Skilled in SOLID design principles and TDD.
  • Familiar with Application Security via OWASP Top 10 and common mitigation strategies.
  • Detailed knowledge of database design and object/relational database technology.
  • Good experience in MS Fabric
  • AWS DevOps Implementation:
  • Lead the design and implementation of CI/CD pipelines using AWS DevOps.
  • Configure and manage build agents, release pipelines, and deployment environments in AWS DevOps.
  • Establish and maintain robust CI processes to automate code builds, testing, and deployment.
  • Integrate automated testing into CI pipelines for comprehensive code validation.
  • Continuous Integration:
  • Infrastructure as Code (IaC) - Terraform

Utilize Infrastructure as Code principles to manage and provision infrastructure components on AWS.

  • Implement and maintain IaC templates
  • Monitoring and Optimization:
  • Implement monitoring and logging solutions to track the performance and reliability of CI/CD pipelines.
  • Continuously optimize CI/CD processes for efficiency, speed, and resource utilization.
  • Security and Compliance
  • Implement security best practices within CI/CD pipelines.
  • Ensure compliance with industry standards and regulatory requirements in CI/CD processes.
  • Troubleshooting and Support
  • Provide expert-level support for CI/CD-related issues.

Troubleshoot and resolve build and deployment failures promptly.

Tata Consultancy Services

Today

Hiring for AWS DevOps (EKS)

Dear Tech Professional,

Greetings from Tata Consultancy Services (TCS)!

TCS has always been in the spotlight for being adept in "the next big technologies". What we can offer you is a space to explore varied technologies and quench your techie soul.

What we are looking for : AWS DevOps Engineer

Location : Bangalore, Chennai, Hyderabad, Pune, Bhubaneshwar, Kochi

Exp: 6-14 Years

Interview Mode : Virtual mode (Microsoft Teams)

  • In-depth working knowledge of AWS/Azure cloud services via Infrastructure as Code, provisioning and configuring through code (Terraform) for account build, networking, storage, compute, audit & config services, endpoints, backup, and Identity & Access Management.
  • Good problem-solving skills and understanding of microservices-based architecture
  • AWS fundamentals and hands-on experience in AWS managed services, especially Kubernetes (EKS).
  • Must have hands-on experience with Terraform, Docker, Jenkins, and Kubernetes
  • The resource will be aligned to the Ansible workstream of the Gen2.0 program, so previous experience working with the Ansible Automation Platform is needed
  • Good with the AWS Console as well as the AWS CLI and APIs
  • Experience with tools like Jenkins and Terraform
  • Exposure to Serverspec-style tools for building and provisioning infrastructure in AWS using Jenkins
  • Develop and maintain cloud infrastructure as code and provision AWS environments/accounts for our consumers
  • Develop and implement solution for monitoring the health and availability of services including fault detection, alerting, and recovery
  • Good communication skills
  • Experience in SDLC Process, Change & Release process is essential

Regards,

Prashaanthini

Tata Consultancy Services

Today

Java Software Engineer - React - AWS - DevOps

Role: Java Developer - Software Engineer

Experience: 4-9 Years

Location: Chennai (HYBRID)

Interview: F2F

Mandatory: Java Spring Boot Microservices, React JS, AWS Cloud, DevOps; Node (added advantage)

Job Description:

Overall 4+ years of experience in Java development projects

3+ years of development experience with React

2+ years of experience in AWS Cloud and DevOps

Microservices development using Spring Boot

Technical Stack: Core Java, Java, J2EE, Spring, MongoDB, GKE, Terraform, GitHub, GCP Developer, Kubernetes, Scala, Kafka

Technical Tools: Confluence/Jira/Bitbucket or Git, CI/CD (Maven, Git, Jenkins), Eclipse or IntelliJ IDEA

Experience in event-driven architectures (CQRS and SAGA patterns).

Experience in Design patterns

Build Tools (Gulp, Webpack), Jenkins, Docker, Automation, Bash, Redis, Elasticsearch, Kibana

Technical Stack (UI): JavaScript, React JS, CSS/SCSS, HTML5, Git+

Talentgigs

Today

Lead Software Engineer - Java / AWS

Ideal Candidate:

1: Be a part of IOT Product portfolio and execute towards Digital Transformational initiatives. Prepare design documents in collaboration with product managers and engineering squads in development of use cases for new features. Hands on product lead developer expertise in designing solutions running on hybrid cloud environments.

2: Work as a Software Lead in application development using Java, JavaScript, Python, SQL, and other current technologies running on AWS environments. Drive engineering activities in microservices and cloud-based architecture by leveraging DevOps efficiencies and adopting a new technology stack in AWS. Drive communication and consistently report accurate product status for stakeholders.

3: Able to lead a team of engineers, help them with technical issues. (80% self-work and 20% influencing scrum engineers). Balance time on development projects including Technical Design, Code Reviews, Mentoring, and training. Able to break down requirements and build traceability in design and implementation details. Work with developers to define unit & automated tests and closely monitor development milestones. Collaborate with scrum team to identify functional, system and end to end integration of products leading to deployment.

4: Understand end to end flow in product development and able to prepare design documents and present to Engineering and Product Leadership team. Full stack product development experience with AngularJS, JavaScript, NodeJS would be a big plus

Dover India

Today

AWS Engineer

ABOUT US:

Founded in 2016, DataZymes is a next-generation analytics and data science company driving technology and digital-led innovation for our clients, thus helping them get more value from their data and analytics investments. Our platforms are built on best-of-breed technologies, thus protecting current investments while providing clients more bang for their buck. As we are a premier partner for many Business Intelligence and Information Management companies, we also provide advisory and consulting services to clients helping them make the right decisions and put together a long-term roadmap.

Our mission at DataZymes is to scale analytics and enable healthcare organizations in achieving non-linear, long term and sustainable growth. In a short span, we have built a high-performance team in focused practice areas, built digital-enabled solutions, and are working with some marquee names in the US healthcare industry.

JOB LOCATION: Bangalore

QUALIFICATION REQUIRED: Bachelor's or Master's degree in Computer Science or Information Technology; experience with batch job scheduling and identifying data/job dependencies.

EXPERIENCE REQUIRED: 4-8 years of hands-on experience

EMPLOYMENT TYPE: Full-Time

ROLES AND RESPONSIBILITIES:

  • Design and implement data pipelines using AWS services such as S3, Glue, and EMR, with PySpark.
  • Develop and maintain data processing and transformation scripts using Python and SQL.
  • Optimize data storage and retrieval using AWS database services such as RDS, Redshift and DynamoDB.
  • Build different types of data warehousing layers based on specific use cases.
  • Utilize expertise in SQL and have a strong understanding of ETL and data modelling.
  • Ensure the accuracy and availability of data to customers and understand how technical decisions can impact their business's analytics and reporting.
  • Experience with AWS cloud and AWS services such as S3 Buckets, Glue Studio, Redshift, Athena, Lambda, and SQS queues

COMPETENCIES:

  • Proficiency in data warehousing, ETL, and big data processing.
  • Familiarity with the Pharma domain is a plus.
DataZymes

Today


Cyber Security Engineer-I (IAM, Sailpoint, AWS)

Hybrid mode (mandatory 3 days WFO)

The Opportunity

"The Security Engineer is a highly visible and critical role, collaborating on complex cloud and corporate service edge protection technologies and oversight. With your proven history of technical knowledge of identity and access management systems and services you will be working on a variety of different challenges facing the organization. You will provide both guidance and direct input to help ensure a secure, well-protected environment that complies with all applicable security standards". - Director, Cyber Security.

What We're Looking For:

  • 2-4 years of hands-on experience with SailPoint (ISC or IIQ).
  • At least 2 years of AWS IAM expertise.
  • AWS Security Specialty certification strongly preferred.
  • Solid understanding of identity protocols and standards (OIDC, SAML, API tokens).
  • Proficiency in scripting (Python, PowerShell, Bash).
  • Experience working with JSON and API testing tools (e.g., Postman), plus familiarity with Java or Beanshell scripting.

What You'll Do:

  • Implement and enhance identity security tools, including IGA, ITDR, and NHI platforms.
  • Automate testing, policy/posture reporting, and deployment of IAM solutions via pipelines & IaC.
  • Provide engineering support for integrating security tools and APIs within the IAM program.
  • Collaborate on design, architecture reviews, security testing, and process improvements.
  • Stay current on emerging technologies, support compliance efforts, and advise on best practices.

Why Join Us:

  • Supportive culture that values ownership, customer delight, and mutual respect.
  • Opportunities to make an impact and grow via hands-on learning and mentorship.
  • Competitive pay, benefits, and a focus on work life balance through hybrid work and community events.
FICO

Today

.NET Core, Web Api, MVC, Web Services, SQL, AWS

TCS presents an excellent opportunity for .NET Core, Web Api, MVC, Web Services, SQL, and AWS professionals.

Job Location: Trivandrum

Experience required : 6-12 Yrs

Skills: .NET

Walk in Interview :28th June (Saturday)

Time: 9:00 AM - 3:00 PM (Registration window)

Venue: TCS Peepul Park, TCS Peepul Park Rd, Technopark Campus, Thiruvananthapuram, Kerala 695581

Work Location: Trivandrum

Must-Have

  • ASP.NET,
  • C#.net,
  • HTML, CSS,
  • Rest API
  • SQL
  • Cloud DB's

Good-to-Have

  • Couchbase
  • Kafka
  • Oracle

If you are interested and available to attend, kindly share your updated resume with the subject line "Walk_In_Interview-.NET Core, Web Api, MVC, Web Services, SQL, AWS" along with the following details:

Full Name:

Contact Number:

Current Location:

Total Experience:

Relevant Exp in Microservice:

Current company:

Notice period:

Any career/ Educational gap: Y/N (If yes, specify year and reason for the gap)

Available for Walk-in: Yes/No

Looking forward to your response

Tata Consultancy Services

Today

Network Firewall Administrator(AWS)

Network + Firewall Admin (AWS)

  • Overall Network and Firewall experience - 8 to 10 years
  • AWS Cloud environment: 3+ years of working experience.
  • Ready to work in 24x7 rotational shifts.

  • Preferred to be CCNA certified or at least CCNA Trained.
  • Preferred certifications on Firewall (Fortigate or any)
  • Any AWS cloud Certification is an added advantage.
  • Minimum 4 - 6 years of experience in Network and Firewalls Administration
  • Good knowledge in Networking, LAN, WAN, VLAN, etc., concepts.
  • Good knowledge on FortiGate Firewalls or equivalent firewall types.
  • Experience in last mile access network and network elements, routers, firewalls.
  • Troubleshooting firewall, switches, routers, VPN, etc., errors.
  • Experience on AWS Networks is added advantage.
  • Reviewing error logs and user-reported errors.
  • Should have problem-solving skills and experience in IP networking and static routing, SSH, SMTP, DNS, HTTP/S, DHCP.
  • Respond to alerts and fix them permanently.
  • Working on RCAs and permanent fixes.
  • Willingness to be part of an on-call team.

Tata Consultancy Services

Today

Full Stack Developer - PHP, AWS SES, SMTP, PowerMTA (Email Infrastructure)

About the Role

We're looking for a passionate Full Stack Developer with a strong grasp of both frontend and backend technologies and deep experience in high-volume email infrastructure. You will build scalable systems, fine-tune email deliverability, and continuously innovate in a fast-moving environment.

Experience: 2-5 Years

Key Responsibilities
  • Design, develop, and scale robust web applications using PHP, Node.js, Express.js, and React.js.
  • Work with both relational (MySQL) and non-relational (MongoDB) databases.
  • Architect and maintain email infrastructure leveraging SMTP servers, AWS SES, and PowerMTA.
  • Containerize applications with Docker and orchestrate multi-service environments (e.g., Docker Compose or Kubernetes).
  • Implement and manage CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD) to automate testing, builds, and deployments.
  • Write clean, modular, and maintainable code following best practices and coding standards.
  • Optimize applications for maximum speed, reliability, and scalability.
  • Collaborate with cross functional teams to design, develop, and launch new features.
  • Stay current with emerging technologies and proactively propose technical improvements.

Required Skills
  • Frontend: React.js, JavaScript (ES6+), HTML5, CSS3
  • Backend: PHP, Node.js, Express.js
  • Databases: MySQL, MongoDB
  • Email Infrastructure: SMTP server management, AWS SES, PowerMTA configuration & troubleshooting; prior experience in high-volume email delivery
  • DevOps & Hosting: Docker, containerization, server configuration, AWS, Linux CLI
  • CI/CD: experience building and maintaining pipelines with GitHub Actions, Jenkins, GitLab CI/CD, or similar
  • Other Tools: Postfix, SendGrid, RESTful APIs, webhooks, Docker Compose, Kubernetes (nice to have)

What We're Looking For
  • A passion for technology, continuous learning, and clean code.
  • Ownership mentality with a strong sense of responsibility.
  • A collaborative team player with a knack for creative problem solving.
  • Ability to balance development speed with product quality.
  • Proven experience deploying and operating containerized applications in production.

Why Join Us
  • Work with cutting edge technologies and modern development practices.
  • Growth focused, learning friendly culture that values innovation.
  • Balanced, flexible work environment with a supportive team.

Manhattan Tech Ventures

Today

AWS & Low Latency Infrastructure Engineer (AWS / Crypto Trading)

Role:

AWS & Low Latency Infrastructure Engineer

Location:

Gurugram, India (on-site)

Description:

In a domain where microseconds matter and cloud-native edge counts, tensorfox is building systems to compete head-to-head with high-frequency trading firms - but on crypto rails. We are seeking a deeply technical infrastructure engineer who knows AWS inside out and has lived and breathed low latency. You'll design and maintain the foundational systems that allow us to see, act, and win faster than the competition - across global markets and ephemeral network conditions.

While the core focus is latency and performance, you'll also own key aspects of DevOps: automation, observability, and reliable deploy pipelines. This isn't about managing dashboards - it's about engineering systems that deliver speed without fragility.

The Mission

  • Architect and operate ultra-low latency infrastructure optimized for AWS-based crypto exchanges (e.g., Binance).
  • Benchmark and reduce every millisecond across the stack - network hops, instance placement, serialization, and code execution.
  • Own AWS infrastructure: design for resilience, tune for performance, automate for scale.
  • Set up and manage CI/CD pipelines, infrastructure-as-code, and system observability.
  • Collaborate with trading and engineering teams to deploy and monitor high-performance market data and order execution pipelines.
  • Build observability systems that don't just monitor - they help predict and prevent latency degradation.
  • Stay current on the evolving cloud exchange landscape and proactively adapt systems to maintain a competitive edge.
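A role like this lives and dies by measurement: in trading paths, tail latency (p99), not the average, is what matters. A minimal, illustrative sketch in plain Python (not any tensorfox tooling; `measure_latency_us` and the stand-in workload are hypothetical) of collecting percentile timings:

```python
import statistics
import time

def measure_latency_us(op, samples=1000):
    """Time repeated calls to `op`; return p50/p99 latency in microseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        op()
        timings.append((time.perf_counter() - start) * 1e6)
    # n=100 yields 99 cut points: index 49 is the median, index 98 is p99.
    cuts = statistics.quantiles(timings, n=100)
    return {"p50": cuts[49], "p99": cuts[98]}

# Stand-in for a real hot path (serialization, parsing, a network round trip).
stats = measure_latency_us(lambda: sum(range(100)))
```

In practice the same measurement is taken across network hops and instance placements, and the p50/p99 gap is what latency-tuning work tries to close.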

The Skills and Qualifications

  • Deep experience with AWS networking, compute, and systems internals - EC2, ENA, placement groups, Nitro, latency tuning.
  • Strong grasp of Linux performance tuning, networking (TCP/UDP, DNS, BGP), and low-level debugging tools (e.g., perf, tcpdump, strace).
  • Hands-on experience optimizing for low latency in a trading, gaming, or real-time systems environment.
  • Comfortable with infrastructure automation - Terraform, Ansible, or similar.
  • Experience building and maintaining CI/CD workflows and monitoring infrastructure.
  • Programming proficiency for infrastructure tooling (Python preferred).
  • Bonus: Familiarity with Rust or C++ in performance-critical systems.
  • Bonus: Exposure to crypto exchanges, market data APIs, or cloud-based trading infra.
  • Bonus: Experience with kernel bypass (DPDK), FPGA, or other high-performance network stacks.

The Compensation

A starting salary of $40,000 - 50,000 USD per year (depending on experience). For the right infrastructure engineer - latency-aware, automation-savvy, and cloud-native - this is a high-impact foundational role. Equity and/or performance-linked upside may be available for candidates who can demonstrably move the needle on latency and execution speed.

company icon

tensorfox

calendar icon

Today

Sr. Software Engineer (AWS) - Backend

About Zeller

At Zeller, we're champions for businesses of all sizes, and proud to be a fast-growing Australian scale-up taking on the ambitious goal of reimagining business banking and payments.

We believe in a level playing field, where all businesses benefit from access to smarter payments and financial services solutions that accelerate their cash flow, help them get paid faster, and give them a better understanding of their finances. So we're hard at work building the tools to make it happen.

Zeller is growing fast, backed by leading VCs, and brings together a global team of passionate payment and tech industry professionals. With an exciting roadmap of innovative new products under development, we are building a high performing team to take on the outdated banking solutions. If you are passionate about innovation, thrive in fast-paced environments, embrace a challenge, hate bureaucracy, and can't think of anything more exciting than disrupting the status-quo, then read on to learn more.

Role, Responsibilities and Experience:

  • Analytical and able to work with fuzzy requirements
  • Methodically translate discussions with stakeholders, documents, and own research findings into technical designs and implementation steps
  • Prior experience in handling a team of software engineers
  • Build-to-last, go-to-production mindset versus build-as-proof-of-concept
  • Strong background in software engineering and design patterns
  • Experience in microservices and serverless architecture
  • Knowledge of architecture patterns such as CQRS and event sourcing
  • Design, develop, and deploy microservices and serverless applications using Node.js, TypeScript, and AWS
  • Unit tests using Jest, along with Supertest and Postman as supporting tools
  • Experience with NestJS
  • Good knowledge of multi-threaded and socket programming
  • Instinctive desire to maintain code quality, tidiness, and zero technical debt
  • Strong understanding of testing practices (TDD/BDD), with tools like Jest, Supertest, and Postman
  • Good with API design and protocols, e.g. REST, WebSocket, SOAP
  • Good understanding of request/response vs. async protocols
  • Familiarity with production-grade monitoring, logging, and alerting
  • Can work with various databases to match query and storage requirements, e.g. DynamoDB, SQL, DocumentDB
  • Build and maintain scalable REST APIs integrated with DynamoDB, S3, SNS/SQS, Step Functions, and Lambda
  • Experience in cloud-native architecture
  • Understanding of data lakes and data warehousing
  • Knowledge of secure coding, e.g. OWASP, XSS, CORS
  • Experience in authentication standards and platforms, e.g. JWT, OAuth, Identity Federation
  • Experience in the AWS cloud environment

  • AWS serverless architecture
  • Microservices
  • Blue/green deployments
  • Own CI/CD processes using CodePipeline, CodeBuild, and CodeDeploy
  • Infrastructure as Code (IaC): Terraform, CloudFormation
  • AWS DevOps services: SNS, SQS, EventBridge, Step Functions, ElastiCache, Load Balancing, Route 53, CloudFront, ECS, ECR, Auto Scaling, S3, RDS, DynamoDB, DocumentDB
  • Improve observability using CloudWatch, X-Ray, and other monitoring tools

  • Proven track record in developing and maintaining mission-critical, high-load production systems with a 99.999% SLA
  • Proven track record in supporting rapid and agile product deployments to different environments: dev, test, stress-testing, staging/production
  • Contribute to and evolve our technical architecture and engineering processes
  • Participate in system design and architecture reviews
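The CQRS and event-sourcing patterns named above boil down to one idea: the write side appends immutable events, and read models are projections folded from that log. A toy, hypothetical sketch of the pattern (plain Python for brevity; Zeller's actual stack is Node.js/TypeScript):

```python
from collections import defaultdict

# Write side: an append-only event log records facts, never current state.
events = [
    {"type": "AccountOpened", "account": "acc-1"},
    {"type": "FundsDeposited", "account": "acc-1", "amount": 500},
    {"type": "FundsWithdrawn", "account": "acc-1", "amount": 120},
]

def project_balances(event_log):
    """Read side: fold the event stream into a query-optimized model."""
    balances = defaultdict(int)
    for e in event_log:
        if e["type"] == "FundsDeposited":
            balances[e["account"]] += e["amount"]
        elif e["type"] == "FundsWithdrawn":
            balances[e["account"]] -= e["amount"]
    return dict(balances)

balances = project_balances(events)  # {"acc-1": 380}
```

Because state is derived, new read models can be built later by replaying the same log, which is why the pattern pairs naturally with CQRS's separate read and write paths.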

Attributes:

  • Loves challenging the status-quo
  • Ability to work autonomously yet collaboratively
  • Prepared to be bold yet consistent with your engineering principles
  • Logical, ethical, mature and responsible
  • Fast learner, humble and loves to share knowledge
  • Calm, and maintains composure in exceptional circumstances such as production issues and timeline pressures

Qualifications and experience

  • Minimum of a Bachelor's degree in software engineering (or related)
  • 5+ years of working experience in a hands-on technical software engineering role
  • Demonstrable experiences in developing mission-critical systems

Bonus Points

  • Experience in fintech
  • AWS Certified Solutions Architect (Associate or Professional)
  • Experience working within a high-growth environment
  • Experience in other programming languages
  • Experience in payments
  • Exposure to Domain-Driven Design (DDD)
  • Experience with PCI-compliant environments (PCI-DSS, etc.)
company icon

Zeller

calendar icon

Today

AWS Infra Engineer

Dear Candidate

Greetings from TATA Consultancy Services

Job Openings at TCS

Skill: AWS Infra Engineers

Exp range: 5 yrs to 10 yrs

Role: Permanent Role

Preferred location: Hyderabad, Indore, Pune

Please find the job description below.

  • Role: AWS Infra Engineers
  • Required technical Skillset: Security (IAM, Security Group), AWS Infra services (EC2, S3, Storage Snapshot etc.), Networking (Direct Connect, VPC, Subnet etc.)
  • AWS Account Management: Oversee and manage AWS accounts, including the setup and maintenance of resources and services such as Redshift, RDS, Glue, Lambda, and Step Functions
  • Networking and Security: Configure and manage network components, including VPCs, subnets, and security groups, while ensuring robust security practices and compliance.
  • User Management: Set up and manage IAM policies to control user access and permissions across AWS accounts.
  • Hybrid Ecosystem Management: Administer and integrate AWS resources within a hybrid environment, ensuring seamless operation between on-premises and cloud systems.
  • Disaster Recovery: Develop and execute disaster recovery plans to ensure data protection and business continuity.
  • Infrastructure as Code (IaC) Knowledge: Experience with Terraform for automating infrastructure provisioning and configuration is a plus.

If you are interested in the above opportunity, kindly share your updated resume immediately with the details below (mandatory):

Name:

Contact No.

Email id:

Total exp:

Relevant Exp:

Fulltime highest qualification (Year of completion with percentage scored):

Current organization details (Payroll company):

Current CTC:

Expected CTC:

Notice period:

Current location:

Any gaps between your education or career (If yes pls specify the duration):

Available for a face-to-face interview on 20th June '25 (Yes/No):

company icon

Tata Consultancy Services

calendar icon

Today

Gen AI Engineer- RAG, Vertex AI, AWS Bedrock

About Us

We're an early-stage startup building LLM-native products that turn unstructured documents into intelligent, usable insights. We work with RAG pipelines, multi-cloud LLMs, and fast data processing - and we're looking for someone who can build, deploy, and own these systems end-to-end.

Key Responsibilities:

RAG Application Development:

Design and build end-to-end Retrieval-Augmented Generation (RAG) pipelines using LLMs deployed on Vertex AI and AWS Bedrock, integrated with Qdrant for vector search.

OCR & Multimodal Data Extraction:

Use OCR tools (e.g., Textract) and vision-language models (VLMs) to extract structured and unstructured data from PDFs, images, and multimodal content.

LLM Orchestration & Agent Design:

Build and optimize workflows using LangChain, LlamaIndex, and custom agent frameworks. Implement autonomous task execution using agent strategies like ReAct, Function Calling, and tool-use APIs.

API & Streaming Interfaces:

Build and expose production-ready APIs (e.g., with FastAPI) for LLM services, and implement streaming outputs for real-time response generation and latency optimization.

Data Pipelines & Retrieval:

Develop pipelines for ingestion, chunking, embedding, and storage using Qdrant and PostgreSQL, applying hybrid retrieval techniques (dense + keyword search), rerankers, and GraphRAG.
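Hybrid retrieval as described here blends a dense-similarity score with a keyword-overlap score before ranking. A dependency-free, illustrative sketch (the toy 3-d vectors, corpus, and `alpha` weighting are hypothetical; a real pipeline would use model-generated embeddings stored in a vector DB):

```python
import math

# Toy corpus with precomputed "embeddings" (stand-ins for model vectors).
docs = [
    {"id": "a", "text": "lease termination clause", "vec": [0.9, 0.1, 0.0]},
    {"id": "b", "text": "rent payment schedule",    "vec": [0.1, 0.9, 0.2]},
]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

def hybrid_search(query_text, query_vec, alpha=0.5):
    """Blend dense similarity with keyword overlap; highest score ranks first."""
    q_terms = set(query_text.lower().split())
    scored = []
    for d in docs:
        dense = cosine(query_vec, d["vec"])
        keyword = len(q_terms & set(d["text"].split())) / max(len(q_terms), 1)
        scored.append((alpha * dense + (1 - alpha) * keyword, d["id"]))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

ranking = hybrid_search("lease termination", [0.8, 0.2, 0.1])  # ["a", "b"]
```

A reranker or GraphRAG stage would then operate on this candidate list rather than on the whole corpus.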

Serverless AI Workflows:

Deploy serverless ML components (e.g., AWS Lambda, GCP Cloud Functions) for scalable inference and data processing.
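A serverless inference component of this kind is usually just a small handler behind an API. A minimal, hypothetical sketch of the AWS Lambda handler shape (the event fields and the token-count stand-in for real inference are illustrative):

```python
import json

def handler(event, context=None):
    """AWS Lambda-style entry point: parse the event, do the work,
    return an API Gateway-compatible response."""
    body = json.loads(event.get("body", "{}"))
    text = body.get("text", "")
    # Stand-in for real work (embedding, OCR, an LLM call, etc.).
    result = {"tokens": len(text.split())}
    return {"statusCode": 200, "body": json.dumps(result)}

# Simulate an API Gateway invocation locally.
response = handler({"body": json.dumps({"text": "hello serverless world"})})
```

Keeping the handler thin like this makes the same logic easy to test locally and to redeploy on GCP Cloud Functions with only the entry-point signature changed.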

MLOps & Model Evaluation:

Deploy, monitor, and iterate on AI systems with lightweight MLOps workflows (Docker, MLflow, CI/CD). Benchmark and evaluate embeddings, retrieval strategies, and model performance.

Qualifications:

  • Strong Python development skills (must-have).
  • LLMs: Claude and Gemini models
  • Experience building AI agents and LLM-powered reasoning pipelines.
  • Deep understanding of embeddings, vector search, and hybrid retrieval techniques.
  • Experience with Qdrant DB
  • Experience designing multi-step task automation and execution chains.
  • Streaming: Ability to implement and debug LLM streaming and async flows
  • Knowledge of memory and context management strategies for LLM agents (e.g., vector memory, scratchpad memory, episodic memory).
  • Experience with AWS Lambda for serverless AI workflows and API integrations.
  • Bonus: LLM fine-tuning, multimodal data processing, knowledge graph integration, or advanced AI planning techniques.
  • Prior experience at startups only (not IT services or enterprises) and a short notice period

Who You Are

  • 2-4 years of real-world AI/ML experience, ideally with production LLM apps
  • Startup-ready: fast, hands-on, comfortable with ambiguity
  • Clear communicator who can take ownership and push features end-to-end
  • Available to join immediately

Why Join Us?

  • Founding-level role with high ownership
  • Build systems from scratch using the latest AI stack
  • Fully remote, async-friendly, fast-paced team
company icon

Bryckel AI

calendar icon

Today

We're Hiring: Senior Python & AWS Developer

We're Hiring: Senior Python & AWS Developer (Min. 7+ years)

Location: Remote (India)

Experience: 7+ years

About the Role

Are you a hands-on Python developer with strong AWS experience? We're looking for a Senior Developer who can build scalable applications, develop powerful integrations, handle complex data processing-and even migrate legacy applications to modern cloud-native environments.

This role is NOT DevOps or infrastructure-focused. We're seeking application engineers who can write clean code, build real features, and own integrations from end to end.

What You'll Do

Develop robust, scalable business applications using Python and AWS

Build API-based integrations across platforms and partners

Migrate existing apps/platforms to AWS-native architecture

Write data transformation pipelines with Pandas / PySpark

Leverage AWS tools like Lambda, RDS, SQS, SNS, API Gateway, and Kinesis

Monitor and troubleshoot using CloudWatch and performance metrics

Work closely with business teams to deliver on requirements
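A transformation step of the kind listed above typically normalizes raw records and aggregates them. The ad names Pandas/PySpark; the shape is shown here as a dependency-free Python stand-in with hypothetical data:

```python
from collections import defaultdict

# Raw input as it might arrive from an upstream system: inconsistent
# casing, numeric values still encoded as strings.
raw_orders = [
    {"region": "south", "amount": "120.50"},
    {"region": "South", "amount": "79.50"},
    {"region": "north", "amount": "200.00"},
]

def transform(records):
    """Normalize keys, cast types, aggregate -- a typical pipeline stage."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"].strip().lower()] += float(r["amount"])
    return dict(totals)

totals = transform(raw_orders)  # {"south": 200.0, "north": 200.0}
```

In Pandas the same stage would be a `groupby`/`sum` after cleaning; the point is that the logic lives in application code, not in the AWS plumbing around it.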

Tech Stack You Should Know

Python (Strong in core development and scripting)

Pandas, PySpark for data manipulation

AWS Services : Lambda, API Gateway, SQS, SNS, DynamoDB, RDS, CloudWatch, Kinesis

PostgreSQL or other relational DBs

Shell Scripting

REST APIs & JSON/XML-based system integrations

You're a Great Fit If You Have:

7+ years in Python development (not just scripting or automation)

Hands-on experience building & integrating real applications

Experience migrating applications from legacy/on-prem to AWS

Proficiency in data-heavy workflows and application logic, not just AWS tools

Strong understanding of software engineering best practices

Candidates who challenge themselves and are flexible in learning new technologies/frameworks

Bonus Points For:

Experience building serverless workflows from scratch

AWS Developer certification

Familiarity with CI/CD (from a dev perspective)

Experience in Java

Soft Skills That Matter:

Clear communication & documentation

Strong problem-solving & debugging skills

Self-driven & collaborative

How to Apply :

If your profile matches, please send your resume to:

Also mention the following in your email:

  • Full Name
  • Mobile Number
  • Total Experience
  • Relevant Experience (Python & AWS)
  • Current Location
  • Current CTC
  • Expected CTC
  • Notice Period
  • Are you open to remote long-term work?

Important Note:

This is a full-time engagement.

Moonlighting or working on multiple jobs in parallel is strictly not allowed as per client policy.

We are looking for candidates who can fully commit to this role and work exclusively with us during the engagement.

company icon

Strive4X Infotech Private Limited

calendar icon

Today

AWS Cloud Solutions Architect / Cloud Presales Engineer

Job Title: AWS Cloud Solutions Architect / Cloud Presales Engineer

Location: Remote (Preferred: Candidates working in overlapping hours with U.S. time zones)

Job Summary:

We are seeking a seasoned AWS Cloud Solutions Architect / Cloud Presales Engineer who can design scalable, secure, and cost-effective AWS architectures while delivering accurate pricing models. This role is critical in pre-sales and execution phases, requiring a deep understanding of AWS services, cloud economics, and infrastructure automation. The ideal candidate will bridge the gap between technical solutioning and financial impact, working closely with clients and internal stakeholders to provide actionable, scalable cloud strategies.

Key Responsibilities:

1. Engage with clients to understand existing infrastructure and future scalability goals. Assess client environments and design right-sized AWS architectures aligned with business and performance needs.

2. Provide accurate cost estimations using tools like the AWS Pricing Calculator and TCO tools. Identify and recommend cost optimization opportunities (e.g., Reserved Instances, Savings Plans, Spot Instances).
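The savings math behind an RI or Savings Plan recommendation is simple enough to sketch. The hourly rates below are hypothetical placeholders, not real AWS prices; actual figures come from the AWS Pricing Calculator or Price List API and vary by region and term:

```python
# Hypothetical hourly rates for some instance type (illustrative only).
ON_DEMAND_HOURLY = 0.10   # USD/hr, on-demand
RESERVED_HOURLY = 0.062   # USD/hr effective, assumed 1-yr no-upfront RI
HOURS_PER_YEAR = 8760

def annual_savings(instance_count):
    """Annual USD saved and percentage saved by committing to RIs."""
    on_demand = instance_count * ON_DEMAND_HOURLY * HOURS_PER_YEAR
    reserved = instance_count * RESERVED_HOURLY * HOURS_PER_YEAR
    return round(on_demand - reserved, 2), round(100 * (1 - reserved / on_demand), 1)

savings_usd, savings_pct = annual_savings(10)  # 10 always-on instances
```

The same per-hour comparison, extended with utilization assumptions, is how a pre-sales proposal turns an architecture diagram into a defensible TCO number.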

3. Prepare and present technical proposals and architecture diagrams during pre-sales engagements.

4. Create Infrastructure-as-Code templates using CloudFormation or Terraform for repeatable deployments.

5. Use the AWS Well-Architected Tool to validate architecture best practices.

6. Collaborate with engineering teams to implement cloud solutions or guide them through architecture deployment.

7. Stay current with AWS service updates and FinOps best practices.

Required Skills & Experience:

Cloud Architecture: EC2, RDS, Auto Scaling, ELB, S3, VPC, ECS, Lambda

Cost Estimation & FinOps:

Proficient with AWS Pricing Calculator, TCO tools, Cost Explorer

Experience recommending Savings Plans and RI purchases

Basic to intermediate knowledge of FinOps practices (FinOps Certified Practitioner a plus)

DevOps & Tools:

Infrastructure as Code: CloudFormation and/or Terraform

AWS Well-Architected Tool for reviews and assessments

Certifications:

Required: AWS Certified Solutions Architect - Associate

Preferred: AWS Certified Solutions Architect - Professional

Bonus: FinOps Certified Practitioner or equivalent

Preferred Qualifications:

1. 5+ years in cloud solution architecture or pre-sales engineering roles

2. Experience working directly with enterprise clients or in a client-facing role

3. Ability to communicate complex technical ideas in clear business terms

4. Strong presentation and documentation skills

Nice to Have:

1. Hands-on experience with cloud migration projects

2. Background in cloud cost governance and optimization programs

3. Familiarity with other cloud platforms (Azure/GCP) is a plus

company icon

MokshaaLLC

calendar icon

Today

AWS DevOps

KPMG in India, a professional services firm, is the Indian member firm affiliated with KPMG International and was established in September 1993. Our professionals leverage the global network of firms, providing detailed knowledge of local laws, regulations, markets, and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, and Vadodara.

KPMG in India offers services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focussed, and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.

KPMG Advisory professionals provide advice and assistance to enable companies, intermediaries, and public sector bodies to mitigate risk, improve performance, and create value. KPMG firms provide a wide range of Risk Advisory and Financial Advisory Services that can help clients respond to immediate needs as well as put in place the strategies for the longer term.

Job Title: Consultant

Skills: AWS DevOps

Location: WFH

Shift timing: 06:30 PM - 03:30 AM IST

Responsibilities

  • Infrastructure Design & Implementation: Design, build and manage cloud infrastructure on AWS, including using tools like Lambda, API Gateway, DynamoDB and S3.
  • Container Orchestration: Support container environments using Docker, Rancher and Kubernetes.
  • CI/CD Pipeline Management: Develop and optimize CI/CD pipelines using tools like Jenkins, GitHub and Harness.
  • Application Deployment and Management: Deploy, maintain and manage applications on AWS, ensuring high availability and performance.
  • Security & Compliance: Implement and enforce security measures to protect cloud infrastructure and applications.
  • Troubleshoot and resolve issues related to cloud infrastructure and applications.

Education Qualification and Experience

  • AWS Expertise: Proficiency in AWS services and tools, including EC2, VPC, S3, Lambda, API Gateway, DynamoDB and IAM.
  • Containerization and Orchestration: Experience with Docker, Rancher and Kubernetes.
  • CI/CD Pipelines: Familiarity with CI/CD tools like Jenkins, GitLab and GitHub Actions.
  • Cloud Native: Knowledge of cloud-native concepts such as microservices, serverless functions and containerization.
  • Experience with DevOps and DevSecOps principles and practices.
  • Ability to troubleshoot and resolve complex issues.
  • Ability to communicate effectively with both technical and non-technical stakeholders.
  • Bachelor's degree in computer science or software engineering required.

Note: In future, Onsite travel might be required.

Equal employment opportunity information

KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their colour, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability, or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavour for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you.

company icon

KPMG India

calendar icon

Today

AWS DevOps Engineer - Immediate Joiner

Company Description

IENERGY NETZERO is a cutting-edge EHS and ESG software platform that empowers organizations with digital systems for Environment, Health & Safety, Sustainability, and ESG reporting. Leveraging decades of domain expertise, it integrates real-time data to streamline quality, operational risk, compliance, and product stewardship, driving measurable business outcomes. Trusted by over 15,000 users, including Fortune 500 companies, the platform accelerates ROI and user adoption through advanced technology and AI-powered insights. IENERGY NETZERO includes advanced platforms such as IENERGY AURORA , IENERGY AVALANCHE , IENERGY VISVA , and IENERGY ICEBERG .

Role Description

This is a full-time on-site role for an AWS DevOps Engineer - Immediate Joiner, located in Noida. The AWS DevOps Engineer will be responsible for managing and automating infrastructure, developing software, administering systems, and maintaining continuous integration pipelines. The role involves working with Linux systems and ensuring the seamless deployment and operation of scalable and robust cloud solutions.

Qualifications

  • Minimum 2-3 years of Experience as DevOps Engineer
  • Experience with Infrastructure as Code and Continuous Integration
  • Managing Code Repositories and CI/CD pipeline
  • Expertise in System Administration and Linux
  • Provisioning and Management of AWS Resources
  • Knowledge of ELK, DevSecOps
  • Familiarity with AWS services and architecture is a plus
  • Relevant certifications in AWS or DevOps are beneficial
  • Bachelor's degree in Computer Science, Engineering, or related field
company icon

IENERGY

calendar icon

Today

AWS Data Engineer

• Job Profile:

Focuses on the data backend. Designs/optimizes the Redshift schema for conversational BI queries. Builds and maintains the semantic layer. Develops data pipelines if needed. Ensures efficient and secure data access.

• Required Skills:

Expertise in Amazon Redshift: Schema design, performance tuning (distribution keys, sort keys, WLM), query optimization, security.

SQL Mastery: Complex queries, window functions, CTEs.

Data Modeling: Star/snowflake schemas, dimensional modeling.

ETL/ELT Development: Experience with tools like AWS Glue, Apache Airflow, dbt.

Semantic Layer Development: Understanding how to map business terms to physical data structures.

Scripting (Python preferred).

Good to have DBT knowledge

company icon

Tredence Inc.

calendar icon

Today

Data Engineer - Databricks with AWS 5+ Years HYD/Bang/Pune/Mum Full Time

Hi All,

We are hiring Data Engineers with AWS, Python

Data Engineer - Databricks with AWS

Experience: 5+ Years

Location: Hyderabad/Bangalore/Mumbai/Pune

Work from office all 5 Days

Immediate to 15 Days Only

Job Description:

  1. Proficiency in data engineering programming languages (preferably Python, alternatively Scala)
  2. Proficiency in cluster computing frameworks e.g. Spark
  3. Proficiency in at least one cloud data Lakehouse platforms (preferably AWS data lake services or Databricks)
  4. Proficiency in at least one scheduling/orchestration tools (preferably Airflow, alternatively AWS Step Functions or similar)
  5. Proficiency with data structures, data serialization formats (JSON, AVRO or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batching, and stream), one or more data modelling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), TDD (or BDD) and CI/CD tools (Jenkins, Git,)
  6. Strong organizational, problem-solving and critical thinking skills; Strong documentation skills

company icon

Tekgence Inc

calendar icon

Today

Senior AWS Data Engineer

What you'll be doing

  • Build and deploy the infrastructure for ingesting high-volume support data from consumer interactions, devices, and apps.
  • Design and implement the processes that turn data into insights. Model and mine the data to describe the system's behaviour and to predict future actions.
  • Enable data-driven change: build effective visualizations and reports presenting data insights to all stakeholders, internal (corporate) and external (our SaaS customers)
  • Develop and maintain the data-related code in an automated CI/CD build/test/deploy environment
  • Generate specific reports needed across tenants to allow insight into agent performance and business effectiveness.
  • Research individually and in collaboration with other teams on how to solve problems

What we seek in you:

  • Bachelor's degree in computer science, Information Systems, or a related field.
  • Minimum of 5+ years of relevant working experience in data engineering.
  • Experience working with cloud Data Warehouse solutions and AWS Cloud-based solutions.
  • Must have strong experience with AWS Glue, DMS, Snowflake.
  • Advanced SQL skills and experience with relational databases and database design.
  • Experience with working on large data sets and distributed computing like Spark
  • Strong proficiency in data pipeline and workflow management tools (Airflow).
  • Excellent problem-solving, communication, and organizational skills. Proven ability to work independently and with a team.
  • Is a self-starter and action-biased, with strong communication skills for handling stakeholder communications.
  • Follows agile methodology to work, collaborate and deliver in a global team set up. Ability to learn and adapt quickly.

Life at Next:

At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.

Perks of working with us:

  • Clear objectives to ensure alignment with our mission, fostering your meaningful contribution.
  • Abundant opportunities for engagement with customers, product managers, and leadership.
  • You'll be guided by progressive paths while receiving insightful guidance from managers through ongoing feedforward sessions.
  • Cultivate and leverage robust connections within diverse communities of interest. Choose your mentor to navigate your current endeavors and steer your future trajectory.
  • Embrace continuous learning and upskilling opportunities through Nexversity.
  • Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies. Embrace a hybrid work model promoting work-life balance. Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones.
  • Embark on accelerated career paths to actualize your professional aspirations.

Who we are?

We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet the unique needs of our customers.

Join our passionate team and tailor your growth with us!

company icon

TVS Next

calendar icon

Today

Senior AWS - Windows implementation Engineer

Position: Senior AWS - Windows implementation Engineer

Experience: 8+years

Location: Chennai/Delhi/Bangalore/Coimbatore

Joining : Immediate joiners

Responsibilities:

  • Lead and implement AWS account-to-account migration strategies and execution.
  • Design and deploy AWS Landing Zones and ensure secure, scalable cloud environments.
  • Develop and maintain Terraform code for infrastructure provisioning and automation.
  • Configure and manage Security Groups, IAM roles, VPCs, and networking components.
  • Deploy and manage EC2 instances for SUSE Linux
  • Collaborate with security, networking, and application teams to ensure seamless migration.
  • Troubleshoot and resolve issues related to cloud infrastructure and deployments.
  • Document architecture, processes, and best practices.

Required Skills & Qualifications:

  • 7+ years of experience in cloud infrastructure engineering, with a focus on AWS.
  • Strong hands-on experience with Terraform and Infrastructure as Code (IaC).
  • Proficiency in Linux and Windows system administration.
  • Deep understanding of AWS services: EC2, VPC, IAM, S3, CloudWatch, CloudTrail, etc.
  • Experience with AWS Landing Zone setup and governance.
  • Expertise in Security Group design and implementation.
  • Proven experience in EC2 deployment automation and configuration management.
  • Familiarity with CI/CD pipelines and DevOps practices.
  • Excellent problem-solving and communication skills.

Preferred Qualifications:

  • AWS Certifications (e.g., AWS Certified Solutions Architect - Professional).
  • Experience with multi-account AWS environments and Control Tower.
  • Knowledge of Bash scripting, Terraform, or other automation tools.

If interested, please send an updated CV along with the following details:

Total experience:

Current Salary:

Expected Salary:

Notice Period:

Current Location:

Crystal Solutions

Leading Recruitment Service Provider

Pranali Dahiwele

Talent Acquisition Specialist

Website :

company icon

Leading IT Company in India

calendar icon

Today

AWS Engineer

Job description-

  • Design, implement, and manage scalable cloud infrastructure using AWS services like EC2, S3, RDS, Lambda, and VPC.
  • Collaborate with DevOps and development teams to implement automation for infrastructure provisioning, configuration, and deployment.
  • Monitor and optimize AWS environments for cost, performance, and security.
  • Troubleshoot cloud-based infrastructure issues, identify bottlenecks, and implement solutions.
  • Develop scripts for cloud infrastructure automation and CI/CD pipelines.
  • Implement and enforce best practices for cloud security and data protection.
  • Provide technical expertise and recommendations on cloud-based applications, architecture, and services.
  • Perform cloud cost optimization to ensure the efficient use of resources.
  • Maintain documentation of infrastructure, configurations, and cloud architecture.
  • Assist in the migration of on-premises applications to AWS cloud infrastructure.

Requirements:

  • Bachelor's degree in computer science, Information Technology, or related field, or equivalent practical experience.
  • Experience: 3 years.
  • Strong scripting skills in React, Node.js, and other automation languages.
  • Deployed containerized Node.js/React apps on AWS ECS.
  • Proven experience working with AWS services, including EC2, S3, VPC, IAM, Lambda, CloudFormation, and RDS.
  • Strong knowledge of cloud infrastructure, networking, and security best practices.
  • Experience with infrastructure automation tools such as Terraform, CloudFormation, or Ansible.
  • Familiarity with containerization technologies like Docker and Kubernetes.
  • Experience with CI/CD pipelines and DevOps methodologies.
  • Knowledge of monitoring and logging tools such as CloudWatch, Datadog, or Splunk.
  • Excellent problem-solving, troubleshooting, and analytical skills.
  • Strong communication and collaboration skills.

Preferred Qualifications:

  • AWS Certified Solutions Architect Associate or Professional.
  • Experience with serverless architectures and AWS Lambda.
  • Knowledge of microservices architecture and API integrations.
  • Familiarity with infrastructure-as-code practices and tools.
  • Experience with high-availability, disaster recovery, and scaling architectures on AWS.

company icon

TAC Security

calendar icon

Today

Technical Architect (Dotnet + AWS)

Job Title: DotNet Architect

Shift Timings: 11 AM to 8 PM IST

Location: Remote, Work from Home

Experience Required: 10 - 14 Years

Candidates must be able to work in a fast-paced environment; possess strong technical, analytical, and communication skills; have a positive attitude; be willing to learn new technologies; and be able to organize their work efficiently.

Technologies:

C#, AWS Lambda, Azure Functions, MSSQL

SpreadsheetGear engine (Excel reporting) - good to have

Requirements:

  • Proficiency in data modelling and code-driven database schema management.
  • Experience with serverless architecture (AWS Lambda functions).
  • Experience working with content management systems (Umbraco preferred).
  • Experience with source control systems such as GitHub.
  • Writes high-quality, thoroughly tested code that meets business requirements.
  • Experience with CI/CD approaches using technologies such as Azure Pipelines and/or GitHub Actions; Docker is good to have.
  • Good communication skills.
  • Professional, proactive, and able to work in an agile environment.
company icon

Programmers.io

calendar icon

Today

Senior BI Consultant (Python scripting, AWS services) with fast-growing software company, Baner, Pune

Position: Senior BI Consultant

Years of Experience: 6+ years

Qualifications: Any Graduate

Job Location: Pune, Baner

Job Summary

We are seeking a highly skilled Senior BI Consultant with extensive experience in designing, developing, and maintaining scalable Business Intelligence and data warehousing solutions. This role requires a strong background in Python scripting, AWS services (Lambda, S3, Redshift), and data science methodologies to extract actionable insights and support data-driven decision-making. The candidate will also be responsible for ensuring adherence to data governance standards and best practices while collaborating closely with stakeholders, data engineers, and data scientists to deliver robust BI solutions.

Key Responsibilities:

Data Integration and Pipeline Development

Design, implement, and optimize ETL/ELT pipelines to ingest, transform, and load data into AWS Redshift from various sources.

Write efficient and reusable Python scripts to automate data processing workflows.

Utilize AWS Lambda for serverless data processing and real-time data transformations.

Data Science and Analytics

Apply data science techniques to analyze large datasets, develop predictive models, and deliver actionable insights.

Integrate machine learning models and statistical analysis into BI solutions.

Conduct A/B testing and advanced analytics to support strategic business decisions.

Manage and monitor AWS S3 storage, ensuring data integrity, availability, and performance.

Data Warehousing and Database Management

Architect and maintain data warehouses and data marts to support reporting and analytical needs.

Implement and enforce data modeling best practices to ensure scalability and performance.

Data Governance and Security

Establish and enforce data governance frameworks to ensure data quality, security, and compliance with industry standards.

Collaborate with cross-functional teams to define data policies and implement robust access controls for sensitive data.

Maintain data cataloging and documentation to ensure transparency and ease of access across the organization.

Collaboration and Leadership

Lead BI development efforts, mentor junior team members, and share best practices.

Work closely with data engineers, data scientists, and business analysts to deliver end-to-end solutions.

Stay updated with emerging trends in data warehousing, BI, and cloud technologies to recommend innovative approaches.

Required Skills and Qualifications

6+ years of experience in BI development, data engineering, or a related field.

Strong expertise in Python/R scripting for data processing and automation.

Hands-on experience with AWS services: Lambda, S3, and Redshift.

Proven experience in data warehousing design, implementation, and management.

Proficiency in SQL for advanced query writing and optimization.

Deep understanding of data governance principles, including data quality, security, and compliance.

Experience with data visualization tools such as Power BI or similar.

company icon

Seventh Contact Hiring Solutions

calendar icon

Today


AWS IAM Engineer

Skills (specific programming languages):

  • In-depth knowledge of IAM for at least one public cloud.
  • Architect and automate the management of cloud IAM.
  • Support the Identity and Access Management Identity Services and Capabilities team in the Technology Risk & Information Security organization.
  • Maintain and provide recommendations on operational IAM components in the cloud.
  • Develop backend APIs using Python/Golang/Java.
  • Hands-on experience with one of the public clouds - AWS/GCP.
company icon

IntraEdge

calendar icon

Today

AWS Admin

Company Profile: Quantum Integrators is an international strategy and business consulting group whose mission is to help clients create and sustain competitive advantage. As a truly client-focused firm, our highly trained and experienced consultants consider it their mandate to help client organizations achieve a 'quantum state'.

Position: AWS Admin - Offshore

Location: Pune/Nagpur, Maharashtra (Onsite)

Experience: 4-6 years

Job Description:

Need certified consultants who hold a valid AWS Certified Solutions Architect - Professional or a Specialty-level certification.

Key Responsibilities:

  • Manage and optimize cloud infrastructure on Azure and AWS platforms.
  • Develop and maintain CI/CD pipelines to ensure smooth deployment processes.
  • Automate infrastructure provisioning, configuration, and deployment.
  • Monitor cloud infrastructure performance and implement improvements.
  • Ensure security and compliance across all cloud environments.
  • Troubleshoot and resolve issues related to cloud infrastructure and services.
  • Collaborate with development and operations teams to achieve scalability and reliability.

Qualifications:

  • Experience in managing CI/CD pipelines using tools such as Jenkins
  • Knowledge of scripting languages such as Python, PowerShell, or Bash.
  • Excellent problem-solving and troubleshooting skills.
  • Good communication and collaboration skills.

Skills:

  • Azure
  • AWS
  • Azure Data Factory
  • Databricks
  • DevOps
  • CI/CD
  • Jenkins
  • Bitbucket
  • Python
  • PowerShell
  • Bash
company icon

Quantum Integrators

calendar icon

Today

Senior Cloud Infrastructure Engineer AWS

Job Summary:

We are seeking a highly skilled and experienced Senior Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecture. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance. You will also provide mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:

Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS).

Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability.

Implement robust access control via IAM roles and policy orchestration, ensuring least-privilege and auditability across multi-environment deployments.

Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments.

Support deployment of infrastructure Lambda functions.

Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment.

Utilize Kubernetes deployment tools such as Helm/Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management.

Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments.

Ensure auditability and observability of pipeline states.

Implement security best practices, audit, and compliance requirements within the infrastructure.

Engage with clients to understand their technical and business requirements, and provide tailored solutions.

If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads.

Troubleshoot and resolve complex infrastructure issues.

Qualifications:

6+ years of experience in Infrastructure Engineering or similar role.

Extensive experience with Amazon Web Services (AWS).

Proven ability to architect for scale, availability, and high-performance workloads.

Deep knowledge of Infrastructure as Code (IaC) with Terraform.

Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD).

Solid understanding of git, branching models, CI/CD pipelines and deployment strategies.

Experience with security, audit, and compliance best practices.

Excellent problem-solving and analytical skills.

Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders.

Experience in technical mentoring, team-forming and fostering self-organization and ownership.

Experience with client relationship management and project planning.

Certifications:

Relevant certifications (e.g., Certified Kubernetes Administrator, AWS Certified Machine Learning Engineer - Associate, AWS Certified Data Engineer - Associate, AWS Certified Developer - Associate, etc.).

Software development experience (e.g., Terraform, Python).

Experience/Exposure with machine learning infrastructure.

Education:

B.Tech/BE in Computer Science, a related field, or equivalent experience.

company icon

Egen

calendar icon

Today

AWS - Software Engineer III

Years of Experience: 9-13

Location: Chennai (work from office)

WORK EXPERIENCE:

Experience working on RESTful Web Services, Microservices, Java Spring boot, ReactJS.

Experience building Web/Mobile applications both UI and Backend (Fullstack developer).

6+ years of consulting experience in AWS - application setup, monitoring, setting up alerts, logging, tuning, and so forth.

Able to work as a junior-level AWS architect.

Exposure to other cloud platforms like Azure, SAP BTP.

Experience in working environments using Agile (SCRUM) and Test-Driven Development (TDD) methodologies.

Experience with building CI/CD pipelines using Gitlab (DevOps role).

CERTIFICATIONS - Nice to have at least one AWS certification.

company icon

Pi Square Technologies

calendar icon

Today

AWS Developer

Primary Skills: AWS, Lambda

Experience: 4 to 12 years

Location: LTIMindtree locations

AWS Developer -

Glue, Lambda, Secrets Manager, PySpark, Python, S3, CloudFormation. We are looking for an experienced AWS developer to join our team and help us build and maintain data pipelines and analytics solutions in the cloud. The candidate will be responsible for developing and implementing AWS-based solutions using a range of tools and technologies, including AWS Glue, Lambda, Secrets Manager, PySpark, and AWS S3.

Requirements:

4+ years of experience as an AWS developer, with a focus on data pipelines and analytics solutions.

Strong experience with AWS Glue, PySpark, and AWS S3.

Proficient in Python programming language and AWS Lambda.

Knowledge of AWS Secrets Manager and best practices for managing credentials and other sensitive data.

Experience with data modeling and database design.
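The Lambda-and-S3 pipeline work described above typically starts from an S3 event notification. As a minimal, hedged sketch of that entry point (the event parsing follows AWS's documented S3 notification shape; the bucket and key values are hypothetical, and a real job would hand the object off to Glue/PySpark rather than echo it):

```python
import json
import urllib.parse


def handler(event: dict, context=None) -> dict:
    """Sketch of an S3-triggered Lambda handler.

    Extracts the bucket name and object key from the first record of an
    S3 event notification. Bucket/key values are illustrative only.
    """
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    # Object keys arrive URL-encoded in the notification payload.
    key = urllib.parse.unquote_plus(record["object"]["key"])
    # A real pipeline would trigger Glue/PySpark processing here; we only echo.
    return {"statusCode": 200, "body": json.dumps({"bucket": bucket, "key": key})}
```

In practice the handler would also read credentials from Secrets Manager via the AWS SDK rather than hard-coding them, which is the "best practices for managing credentials" point above.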

company icon

LTIMindtree

calendar icon

Today

Back End Developer (Python/AWS)

Job Description: Back End Engineer (Python/AWS)

Who We Are

GlueX Protocol is a pioneering software provider powering decentralized applications (dApps) across major EVM-based blockchains like Ethereum, Arbitrum, and Base. We build mission-critical infrastructure for top DeFi protocols with over $10B+ in settled transactions. Our team solves complex challenges in blockchain scalability, intent-based systems, and real-time liquidity routing.


What You Will Do

  • Design, build, and optimize Python-based microservices (Flask/Quart) for high-throughput financial systems
  • Architect and deploy cloud infrastructure on AWS (API Gateway, Lambda, EC2, DynamoDB, RDS)
  • Implement containerized solutions using Docker/Kubernetes
  • Develop CI/CD pipelines and infrastructure-as-code practices
  • Create monitoring systems with CloudWatch and implement auto-scaling solutions
  • Collaborate on blockchain integration layers (no prior Web3 experience required)
  • Optimize performance across distributed systems handling 100K+ RPM

Requirements

Must-Have:

  • 3+ years backend development with Python (typing, async, best practices)
  • Experience with Python web frameworks (Flask, Quart, FastAPI)
  • Proficiency in AWS ecosystem: DynamoDB, Lambda, EC2, CloudWatch, RDS
  • Containerization expertise (Docker/Kubernetes)
  • Strong understanding of REST API design and database optimization
  • Ability to architect fault-tolerant distributed systems

Nice-to-Have:

  • Experience with infrastructure-as-code (Terraform/CDK)
  • Knowledge of WebSockets or real-time data pipelines
  • Background in financial systems or high-frequency applications
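The requirements above emphasize typed, async Python for high-throughput services. A minimal sketch of that style, using only the standard library (the function names, currency pairs, and fixed price are hypothetical illustrations, not from the posting):

```python
import asyncio


async def fetch_quote(pair: str) -> dict[str, object]:
    # Stand-in for an async I/O call (e.g. to a pricing service);
    # the zero-sleep and fixed price are placeholders for real I/O.
    await asyncio.sleep(0)
    return {"pair": pair, "price": 1.0}


async def fetch_all(pairs: tuple[str, ...]) -> list[dict[str, object]]:
    # Fan out concurrent requests with asyncio.gather, a common
    # pattern in high-RPM async backends.
    return list(await asyncio.gather(*(fetch_quote(p) for p in pairs)))


if __name__ == "__main__":
    print(asyncio.run(fetch_all(("ETH/USD", "BTC/USD"))))
```

Frameworks like Quart or FastAPI build on this same coroutine model, so comfort with `async`/`await` and type annotations transfers directly.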

Why Join GlueX?

Impact: Build infrastructure powering the next generation of DeFi

Tech Challenge: Solve unique distributed systems problems at scale

Growth: Learn blockchain integration from industry experts

Team: Work with battle-tested engineers (ex-Alameda, Jump, Uniswap)

Benefits:

  • Competitive salary + equity
  • Fully remote with flexible hours

How to Apply

Send your CV and GitHub/coding samples to with subject:

"Back End Engineer - Your Name - Python/AWS"

Include 1-2 sentences about the most complex system you've optimized.

company icon

GlueX Protocol

calendar icon

Today

Cloud Infrastructure Lead (AWS)

Job Summary:

We are seeking a highly skilled and experienced Lead Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecting infrastructure that survives and thrives in production. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance versus spend. You will also provide technical leadership and mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:

Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS).

Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability.

Implement robust access control via IAM roles and policy orchestration, ensuring least-privilege and auditability across multi-environment deployments.

Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments.

Support deployment of infrastructure Lambda functions.

Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment.

Utilize Kubernetes deployment tools such as Helm/Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management.

Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments.

Ensure auditability and observability of pipeline states.

Implement security best practices, audit, and compliance requirements within the infrastructure.

Provide technical leadership, mentorship, and training to engineering staff.

Engage with clients to understand their technical and business requirements, and provide tailored solutions. If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads.

Troubleshoot and resolve complex infrastructure issues. Potentially participate in pre-sales activities and provide technical expertise to sales teams.

Qualifications:

10+ years of experience in an Infrastructure Engineer or similar role.

Extensive experience with Amazon Web Services (AWS).

Proven ability to architect for scale, availability, and high-performance workloads.

Ability to plan and execute zero-disruption migrations.

Experience with enterprise IAM and familiarity with authentication technologies such as OAuth2 and OIDC.

Deep knowledge of Infrastructure as Code (IaC) with Terraform and/or AWS CDK.

Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD).

Solid understanding of git, branching models, CI/CD pipelines, and deployment strategies.

Experience with security, audit, and compliance best practices.

Excellent problem-solving and analytical skills.

Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders.

Experience in technical leadership, mentoring, team-forming, and fostering self-organization and ownership.

Experience with client relationship management and project planning.

Certifications:

Relevant certifications (for example, Certified Kubernetes Administrator, AWS Certified Solutions Architect - Professional, AWS Certified DevOps Engineer - Professional, etc.).

Software development experience (for example, Terraform, Python).

Experience with machine learning infrastructure.

Education:

B.Tech/BE in Computer Science, a related field, or equivalent experience.

company icon

Egen

calendar icon

Today

Backend Engineer - AWS & Java

Job Summary

We're looking for a skilled Backend Engineer with solid experience in AWS, Java, and building scalable backend systems. The role involves working on APIs, microservices, and data-driven integrations in an Agile setup. Prior experience with FlexPLM is a plus but not required.

Key Responsibilities

  • Develop and maintain REST APIs and microservices using Java, Spring, and AWS.
  • Work hands-on with AWS services like DynamoDB, Lambda, and S3.
  • Support backend design and data integration across systems.
  • Contribute to ongoing development sprints and cross-team collaboration.
  • Participate in code reviews, debugging, and performance tuning.

Required Skills

  • 5+ years of experience in backend development.
  • Hands-on experience with AWS (especially DynamoDB, Lambda, S3).
  • Strong in Java, Spring Framework, SQL, and working with RDBMS.
  • Experience in building or rewriting APIs and microservices.
  • Exposure to Agile development and working with distributed teams.

Nice to Have

  • Familiarity with FlexPLM configuration or customization.
  • Understanding of CI/CD practices or DevOps tooling.
  • Experience with data migration or enterprise system integration.

company icon

ITC Infotech

calendar icon

Today

TCS Hiring for .Net Backend developer with Azure/AWS

Experience: 5 to 12 years only

Job Location: Mumbai

Required Technical Skill Set:

  • Strong experience in C# and .NET development
  • Working experience with relational databases and SQL expertise
  • Good knowledge of containerization with Docker
  • Experience working with AWS cloud, Azure DevOps, and CI/CD pipelines

Kind regards,

Priyankha M

company icon

Tata Consultancy Services

calendar icon

Today

AWS DevOps Engineer

Job Title: Senior AWS DevOps Engineer

Location: Remote

Shift: Night shift, 7:30 PM to 4:30 AM IST

Skillset:

  • Looking for more advanced AWS knowledge, security experience with scanning containers and packages for vulnerabilities, and strong Terraform infrastructure-as-code skills
  • Level of Seniority/Years of Experience: Mid-Senior Level
  • Communication Importance: Important
  • Top 3-5:
  • AWS knowledge
  • Scanning containers and packages for vulnerabilities
  • Strong Terraform (IaC)

company icon

Dexian India

calendar icon

Today


Data Engineer (AWS, Python, PySpark)

Spectral Consultants is hiring Data Engineers for one of the leading management consulting firms.

Location: Gurgaon & Pune

Experience: 1 to 10 years (multiple levels are open)

Responsibilities

  • Lead project deliverables such as business case development, solution vision and design, user requirements, solution mockups, prototypes, technical architecture, test cases, deployment plans, and operations strategy and planning.
  • Actively lead unstructured problem solving to design and build complex solutions, tuning them to meet expected performance and functional requirements.
  • Lead appropriate documentation of systems design, procedures, SOPs, etc.
  • Build cloud applications using serverless technologies, such as custom web applications, ETL pipelines, and real-time/stream analytics applications.
  • Leverage expertise and experience in both traditional and modern data architecture and processing concepts, including relational databases.

Qualifications

  • 5+ years of hands-on experience in creating High Level and Detailed Design documents
  • Good handle on working in distributed computing and cloud services platforms
  • Experience working in an Agile delivery framework, with the ability to mentor and coach teams on agile best practices
  • Expertise in a programming language such as Python or Scala, with the ability to review code written by developers
  • Expertise in commonly used AWS services (or equivalent Azure services) is preferred: EMR, Glue, EC2, Glue ETL, Managed Airflow, S3, Lake Formation, SageMaker Studio, Athena, Redshift, RDS, AWS Neptune

company icon

Spectral Consultants

calendar icon

Today

