Find Your Dream Engineer Job in India

Explore the latest Engineer job openings in India. Discover top companies hiring for Engineer roles across major cities in India and take the next step in your career.


Artificial Intelligence Engineer

Title: Senior AI/ML Engineer (5-7 years of experience)

Job Location: Hyderabad, Telangana

Job Type: Permanent

Work Mode: Onsite

Immediate joiners only; candidates serving a notice period will not be considered at this time.

Description

Our client is seeking a hands-on AI/ML Engineer with deep expertise in large language models, retrieval-augmented generation (RAG), and cloud-native ML development on AWS. You'll be a key driver in building scalable, intelligent learning systems powered by cutting-edge AI and robust AWS infrastructure.

If you're passionate about combining NLP, deep learning, and real-world applications at scale, this is the role for you.

A minimum of 3 years of specialized experience in AI/ML is required.

Core Skills & Technologies

LLM Ecosystem & APIs

  • OpenAI, Anthropic, Cohere
  • Hugging Face Transformers
  • LangChain, LlamaIndex (RAG orchestration)

Vector Databases & Indexing

  • FAISS, Pinecone, Weaviate

AWS-Native & ML Tooling

  • Amazon SageMaker (training, deployment, pipelines)
  • AWS Lambda (event-driven workflows)
  • Amazon Bedrock (foundation model access)
  • Amazon S3 (data lakes, model storage)
  • AWS Step Functions (workflow orchestration)
  • AWS API Gateway & IAM (secure ML endpoints)
  • CloudWatch, Athena, DynamoDB (monitoring, analytics, structured storage)

Languages & ML Frameworks

  • Python (primary), PyTorch, TensorFlow
  • NLP, RAG systems, embeddings, and prompt engineering

What You'll Do

Model Development & Tuning

o Design architecture for complex AI systems and make strategic technical decisions

o Evaluate and select appropriate frameworks, techniques, and approaches

o Fine-tune and deploy LLMs and custom models using AWS SageMaker

o Build RAG pipelines with LlamaIndex/LangChain and vector search engines (a minimal retrieval sketch follows below)
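To make the RAG bullet above concrete, here is a minimal, hedged sketch of the retrieval half of such a pipeline. It assumes the sentence-transformers and faiss-cpu packages and an invented in-memory document list; the client's actual stack (LangChain/LlamaIndex orchestration, Pinecone/Weaviate, SageMaker- or Bedrock-hosted models) would replace these pieces.

```python
# Minimal retrieval-augmented generation (RAG) sketch: embed documents,
# index them with FAISS, retrieve the top matches for a query, and build
# a grounded prompt. Illustrative only; not the client's actual pipeline.
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "SageMaker supports managed training and hosted inference endpoints.",
    "Step Functions can orchestrate multi-step ML workflows on AWS.",
    "FAISS provides fast similarity search over dense embeddings.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")            # small open embedding model
doc_vectors = model.encode(documents).astype("float32")     # shape: (n_docs, dim)

index = faiss.IndexFlatL2(int(doc_vectors.shape[1]))        # exact L2 index
index.add(doc_vectors)

question = "How can I orchestrate a multi-step ML workflow on AWS?"
query_vec = model.encode([question]).astype("float32")
_, top_ids = index.search(query_vec, 2)                     # indices of the 2 nearest docs

context = "\n".join(documents[i] for i in top_ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to an LLM (e.g., via Bedrock)
```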

Scalable AI Infrastructure

o Architect distributed model training and inference pipelines on AWS

o Design secure, efficient ML APIs with Lambda, API Gateway, and IAM

Product Integration

o Lead development of novel solutions to challenging problems

o Embed intelligent systems (tutoring agents, recommendation engines) into learning platforms using Bedrock, SageMaker, and AWS-hosted endpoints

• Rapid Experimentation

o Prototype multimodal and few-shot learning workflows using AWS services

o Automate experimentation and A/B testing with Step Functions and SageMaker Pipelines

• Data & Impact Analysis

o Leverage S3, Athena, and CloudWatch to define metrics and continuously optimize AI performance

• Cross-Team Collaboration

o Work closely with educators, designers, and engineers to deliver AI features that enhance student learning

o Mentor junior engineers and provide technical leadership

Who You Are

• Deeply Technical: Strong foundation in machine learning, deep learning, and NLP/LLMs

• AWS-Fluent: Extensive experience with AWS ML services (especially SageMaker, Lambda, and Bedrock)

• Product-Minded: You care about user experience and turning ML into real-world value

• Startup-Savvy: Comfortable with ambiguity, fast iterations, and wearing many hats

• Mission-Aligned: Passionate about education, human learning, and AI for good

Bonus Points

• Hands-on experience fine-tuning LLMs or building agentic systems using AWS

• Open-source contributions in AI/ML or NLP communities

• Familiarity with AWS security best practices (IAM, VPC, private endpoints)

Global Infovision Private Limited

Today

Senior IT Compliance Engineer

Senior IT Compliance Engineer

Location: Chennai, India

Employment Type: Full time

Role Summary

The Senior IT Compliance & Infrastructure Engineer is a hands-on senior engineer who designs, secures, and scales the corporate IT environment (Google Workspace, Okta, Jamf, Slack, and other SaaS platforms) while ensuring that controls meet or exceed frameworks such as SOC 2, ISO 27001, and PCI DSS. You will be the connective tissue between Infrastructure, Security, and Compliance, owning the full lifecycle of policies, tooling, audits, and automation that support 1,000+ employees across multiple geographies.

Key Responsibilities

Internal & External Audits

  • Schedule, execute, and document internal controls testing (user activity reviews, laptop admin access reviews, asset audits, etc.).
  • Provide evidence and SME support for external audits (SOC 2, ISO 27001, PCI DSS) and customer due diligence requests.

Incident & Risk Management

  • Triage and investigate IT compliance/security incidents and DLP alerts; coordinate remediation with stakeholders.
  • Track root causes, document post-mortems, and drive continuous control improvements.

Policy & Process Engineering

  • Develop, document, and continuously improve IT policies, runbooks, and KPIs, leveraging AI and automation wherever possible.
  • Champion the adoption of an "AI first" mindset to streamline repetitive tasks and enhance service quality.

Tool Lifecycle Management

  • Lead procurement, renewals, and license expansions for corporate IT SaaS tools.
  • Drive license optimisation and cost control; plan and execute tool sunsets in partnership with business owners.

End-to-End Management of Corporate IT Tools

  • Own day-to-day administration and the strategic roadmap for Google Workspace, Okta, Jamf, Slack, and other corporate IT tools, covering configuration, capacity planning, compliance hardening, feature adoption, and continuous improvement.

Access Management & Automation

  • Build and maintain automated provisioning/de-provisioning with Okta Identity Governance, SCIM, and Workflows (a brief illustrative API call is sketched below).
  • Maintain least-privilege models and execute periodic user access and activity reviews.
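As a hedged illustration of the automation called out above, the snippet below deactivates a departing user through Okta's Users API with the requests library. The org URL, API token, and user ID are placeholders; a production flow would more likely run inside Okta Workflows or an Identity Governance campaign rather than a standalone script.

```python
# Hypothetical offboarding helper: deactivate an Okta user via the Users API.
# All identifiers are placeholders; real automation would live in Okta Workflows/IGA.
import requests

OKTA_ORG = "https://example.okta.com"        # placeholder org URL
API_TOKEN = "00a-REDACTED"                    # placeholder SSWS API token
user_id = "00u1abcd2EFGHIJKL3x7"              # placeholder Okta user ID

resp = requests.post(
    f"{OKTA_ORG}/api/v1/users/{user_id}/lifecycle/deactivate",
    headers={"Authorization": f"SSWS {API_TOKEN}", "Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
print(f"Deactivated {user_id}; revocation can now be evidenced for audits.")
```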

IT Onboarding & Offboarding

  • Orchestrate end-to-end onboarding of new joiners, provisioning "birthright" access via both manual and automated workflows.
  • Own the exit process, coordinating with HR, Risk, and other teams to revoke all access within defined SLAs.

Implementation & Integration of New Tools

  • Integrate SSO, SCIM, and access request workflows for newly procured tools.
  • Publish self-service app bundles in Jamf/JumpCloud.

Service Ownership & Team Leadership

  • Coach and develop junior engineers and support analysts, nurturing a security-first, compliance-driven culture rooted in continuous learning and curiosity.

Required Skills & Experience

  1. AI first mindset with demonstrable automation experience (Okta Workflows, Google Apps Script, Zapier, Python, JavaScript).
  2. 4+ years in corporate IT infrastructure, with 3+ years focused on compliance and security.
  3. Minimum 3 years administering Google Workspace and Okta, including advanced SSO/SCIM configurations.
  4. Practical expertise with Okta Identity Governance, Okta Workflows and Okta Device Access.
  5. At least 1 year managing Jamf Pro or an equivalent MDM for macOS/Windows fleets.
  6. Track record implementing and auditing PCI DSS, ISO 27001, and SOC 2 controls across IT systems.
  7. Hands-on experience conducting internal reviews (user activity & access) and managing enterprise DLP programs.
  8. Exceptional written and verbal communication skills paired with meticulous documentation abilities; able to translate technical controls for non-technical stakeholders and produce clear, audit-ready runbooks, diagrams, and knowledge base articles.

Nice to Have

  • Software development experience in JavaScript or Python.
  • Certifications such as Okta Certified Administrator/Consultant, Okta Certified Workflows, Associate Google Workspace Administrator.
  • Familiarity with ITIL/ITSM tooling (Freshservice, Jira Service Management) and CMDB practices.
  • Prior experience in a high growth, global SaaS environment (1000 + endpoints, multi OS).

Chargebee

Today

Cloud Engineer

We are seeking a skilled and motivated Azure Cloud Engineer with 4+ years of hands-on experience working with Microsoft Azure technologies. The ideal candidate will have a strong background in cloud infrastructure, scripting, and automation, with a focus on designing, deploying, and managing secure, scalable, and resilient cloud solutions.

Key Responsibilities:

Azure Cloud Solutions Development & Management:

  • Design, deploy, and manage Azure-based infrastructure using core services such as Azure Virtual Machines, Azure App Services, Azure Networking, Azure Storage, and Azure Active Directory.
  • Implement high-availability and disaster recovery solutions in Azure environments.

Infrastructure as Code (IaC):

  • Develop and maintain IaC scripts using Terraform and/or Azure Resource Manager (ARM) templates for consistent and repeatable cloud deployments.
  • Collaborate with development and infrastructure teams to define and manage infrastructure blueprints.

Cloud Security & Governance:

  • Apply cloud security best practices including network segmentation, identity and access management, encryption at rest and in transit, and security policy enforcement.
  • Ensure compliance with organizational and regulatory security standards.

Scripting and Automation:

  • Automate routine operational tasks using PowerShell, Python, or Bash (a small Python illustration follows below).
  • Develop custom scripts to support application deployments, resource provisioning, and monitoring workflows.
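As one hedged example of the kind of Python automation described above, the snippet below lists all virtual machines in a subscription with the Azure SDK (azure-identity and azure-mgmt-compute). The subscription ID is a placeholder, and a real task would typically act on the results (tagging, deallocation reports, etc.).

```python
# Illustrative operational script: enumerate VMs in a subscription for a report.
# Requires azure-identity and azure-mgmt-compute; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
credential = DefaultAzureCredential()                        # CLI / managed identity / env vars
compute = ComputeManagementClient(credential, subscription_id)

for vm in compute.virtual_machines.list_all():
    # vm.id encodes the resource group; vm.location is the Azure region
    print(f"{vm.name}\t{vm.location}\t{vm.id}")
```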

CI/CD & DevOps Enablement:

  • Build and maintain CI/CD pipelines using Azure DevOps, integrating build, test, and deployment processes for both infrastructure and application code.
  • Work with development teams to streamline deployment strategies and ensure repeatable and consistent releases.

Monitoring, Logging, and Performance Management:

  • Configure and manage Azure Monitor, Log Analytics, and Application Insights to gain visibility into application and infrastructure performance.
  • Create dashboards and alerts to proactively address performance bottlenecks and system anomalies.

Collaboration and Documentation:

  • Collaborate cross-functionally with developers, architects, security teams, and stakeholders to deliver robust cloud solutions.
  • Maintain thorough documentation of architecture designs, procedures, standards, and configurations.

Preferred Qualifications:

  • Microsoft Certified: Azure Administrator Associate or Azure Solutions Architect.
  • Experience with containerization tools such as Docker and orchestration with Kubernetes (AKS).
  • Familiarity with hybrid cloud environments and on-prem to Azure migrations.

MAGANTI IT SOLUTIONS PRIVATE LIMITED

Today

Senior Machine Learning Engineer

About Zupee

We are the biggest online gaming company, with the largest market share in the Indian gaming sector's largest segment - Casual & Board Games. We make skill-based games that spark joy in the everyday lives of people by engaging, entertaining, and enabling earning while at play.

In its three-plus years of existence, Zupee has been on a mission to improve people's lives by boosting their learning ability, skills, and cognitive aptitude through scientifically designed gaming experiences. Zupee presents a timeout from the stressful environments we live in today and sparks joy in people's lives through its games.

Zupee invests in people and bets on creating excellent user experiences to drive phenomenal growth. We have been profitable at the EBT level since Q3 2020, while closing Series B funding of $102 million at a valuation of $600 million. Zupee is all set to transform from a fast-growing startup into a firm contender for the title of biggest gaming studio in India.

ABOUT THE JOB

Role: Senior Machine Learning Engineer

Reports to: Manager- Data Scientist

Location: Gurgaon

Job Summary: We seek an individual to drive innovation in AI/ML-based algorithms and personalized offer experiences. This role will focus on designing and implementing advanced machine learning models, including reinforcement learning techniques like Contextual Bandits, Q-learning, SARSA, and more. By leveraging algorithmic expertise in classical ML and statistical methods, you will develop solutions that optimize pricing strategies, improve customer value, and drive measurable business impact.

Qualifications:

- 3+ years in machine learning, 2+ years in reinforcement learning, recommendation systems, pricing algorithms, pattern recognition, or artificial intelligence.

- Expertise in classical ML techniques (e.g., Classification, Clustering, Regression) using algorithms like XGBoost, Random Forest, SVM, and KMeans, with hands-on experience in RL methods such as Contextual Bandits, Q-learning, SARSA, and Bayesian approaches for pricing optimization.

- Proficiency in handling tabular data, including sparsity, cardinality analysis, standardization, and encoding.

- Proficient in Python and SQL (including Window Functions, Group By, Joins, and Partitioning).

- Experience with ML frameworks and libraries such as scikit-learn, TensorFlow, and PyTorch

- Knowledge of controlled experimentation techniques, including causal A/B testing and multivariate testing.

Key Responsibilities

- Algorithm Development: Conceptualize, design, and implement state-of-the-art ML models for dynamic pricing and personalized recommendations.

- Reinforcement Learning Expertise: Develop and apply RL techniques, including Contextual Bandits, Q-learning, SARSA, and concepts like Thompson Sampling and Bayesian Optimization, to solve pricing and optimization challenges (a toy bandit sketch follows this list).

- AI Agents for Pricing: Build AI-driven pricing agents that incorporate consumer behavior, demand elasticity, and competitive insights to optimize revenue and conversion.

- Rapid ML Prototyping: Quickly build, test, and iterate on ML prototypes to validate ideas and refine algorithms.

- Feature Engineering: Engineer large-scale consumer behavioral feature stores to support ML models, ensuring scalability and performance.

- Cross-Functional Collaboration: Work closely with Marketing, Product, and Sales teams to ensure solutions align with strategic objectives and deliver measurable impact.

- Controlled Experiments: Design, analyze, and troubleshoot A/B and multivariate tests to validate the effectiveness of your models.
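As a toy illustration of the bandit techniques named above (not Zupee's production system), the sketch below runs Thompson Sampling over three simulated price points with Bernoulli conversion rates, using only NumPy; the conversion rates are invented for the example.

```python
# Toy Thompson Sampling for price selection: three candidate prices,
# each with an unknown (simulated) conversion rate. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)
true_conversion = np.array([0.10, 0.14, 0.08])   # hidden per-price conversion rates
successes = np.ones(3)                            # Beta(1, 1) priors
failures = np.ones(3)

for _ in range(10_000):
    sampled_rates = rng.beta(successes, failures) # draw one plausible rate per arm
    arm = int(np.argmax(sampled_rates))           # offer the price that currently looks best
    converted = rng.random() < true_conversion[arm]
    successes[arm] += converted
    failures[arm] += 1 - converted

print("posterior mean conversion:", successes / (successes + failures))
print("plays per price point:", (successes + failures - 2).astype(int))
```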

Required Skills and Experience

  • Uplift Modeling
  • Bayesian Optimization
  • Multi-Armed Bandits
  • Contextual Bandits
  • Pricing Optimization
  • Reinforcement Learning

Zupee

Today

Senior Data Engineer

Job Title: Data Engineer

Location: Hyderabad (WFO)

Experience: 6+ years

Job Description:

Develop and own efficient SQL and Python code to support data analysis and ELT pipeline development.

Build and maintain data transformation models to transform raw warehouse data into usable formats for analysis and reporting (a small illustrative transformation appears after this list).

Conduct exploratory and deep-dive analysis to identify trends, answer business questions, and support strategic decisions.

Partner with cross-functional teams to understand data needs and communicate analytical outcomes clearly and concisely.

Support experimentation efforts to assess performance and recommend improvements.
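Below is a small, hedged illustration of the kind of Python transformation step referenced above: reading a raw warehouse extract, cleaning it, and writing an aggregate suitable for reporting. File names and columns are invented for the example; a real pipeline would run against the cloud warehouse.

```python
# Toy transformation step: raw order extract -> daily revenue summary.
# Paths and column names are illustrative, not a real schema.
import pandas as pd

orders = pd.read_csv("raw_orders.csv", parse_dates=["order_date"])

clean = (
    orders.dropna(subset=["order_id", "amount"])      # drop incomplete rows
          .query("amount > 0")                         # remove refunds / zero lines
          .assign(order_day=lambda d: d["order_date"].dt.date)
)

daily_revenue = (
    clean.groupby(["order_day", "store_id"], as_index=False)
         .agg(orders=("order_id", "nunique"), revenue=("amount", "sum"))
)

daily_revenue.to_parquet("daily_revenue.parquet", index=False)
```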

Skills Requirements:

Experience in data engineering, analytics engineering, or related fields.

Expertise in SQL, Python, and data transformation frameworks.

Strong experience with cloud-based data warehouses.

Business knowledge and retail domain knowledge.

Experience working with business intelligence tools and enabling self-serve analytics.

Familiarity with machine learning operations (MLOps) and experimentation tooling is a plus.

Ingrain Systems Inc

Today

Senior Network Engineer - Cloud & SDN Specialist

Position Overview

We are seeking an exceptional Senior Network Engineer with deep expertise in Software-Defined Networking (SDN) and cloud infrastructure. This role requires a unique blend of advanced networking knowledge and programming skills to architect, implement, and maintain complex cloud networking solutions. The ideal candidate will be proficient in modern networking technologies including OVN, OpenVSwitch, and various tunneling protocols while possessing the coding abilities to automate and optimize network operations.

Key Responsibilities:

(a) Network Architecture & Design

Design and implement scalable cloud network architectures using SDN principles

Architect multi-tenant networking solutions with proper isolation and security controls

Plan and deploy network virtualization strategies for hybrid and multi-cloud environments

Create comprehensive network documentation and architectural diagrams

(b) SDN & Cloud Networking Implementation

Deploy and manage Open Virtual Network (OVN) and OpenVSwitch environments

Configure and optimize virtual networking components including logical switches, routers, and load balancers

Implement network overlays using VXLAN, GRE, and other tunneling protocols

Manage distributed virtual routing and switching in cloud environments

(c) VPN & Connectivity Solutions

Design and implement site-to-site and point-to-point VPN solutions

Configure IPSec, WireGuard, and SSL VPN technologies

Establish secure connectivity between on-premises and cloud environments

Optimize network performance across WAN and internet connections

(d) Programming & Automation

Develop network automation scripts using Python, Go, or similar languages (a brief sketch appears at the end of this section)

Create Infrastructure as Code (IaC) solutions using tools like Terraform or Ansible

Build monitoring and alerting systems for network infrastructure

Integrate networking solutions with CI/CD pipelines and DevOps practices
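As a hedged sketch of the automation scripting mentioned in this section, the snippet below pulls interface state from a single device with the netmiko library. Host and credentials are placeholders; production tooling would more likely use Ansible modules or vendor APIs with proper secrets handling.

```python
# Illustrative network automation: fetch interface status from one device.
# Host/credentials are placeholders; real code would pull them from a vault.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",     # netmiko platform string
    "host": "192.0.2.10",           # documentation/test address (RFC 5737)
    "username": "netops",
    "password": "CHANGE_ME",
}

with ConnectHandler(**device) as conn:
    output = conn.send_command("show ip interface brief")

print(output)  # output could be parsed and pushed to a monitoring/alerting system
```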

(e) Troubleshooting & Optimization

Perform deep packet analysis and network troubleshooting

Optimize network performance and resolve complex connectivity issues

Monitor network health and implement proactive maintenance strategies

Conduct root cause analysis for network incidents and outages

Required Qualifications

(a) Technical Expertise

5+ years of enterprise networking experience with strong TCP/IP fundamentals

3+ years of hands-on experience with Software-Defined Networking (SDN)

Expert-level knowledge of OVN (Open Virtual Network) and OpenVSwitch preferred

Proficiency in a programming language (Python or Go) required

Deep understanding of network protocols: BGP, OSPF, VXLAN, GRE, IPSec

Experience with cloud platforms: AWS, Azure, GCP, or OpenStack

Strong knowledge of containerization and orchestration (Docker, Kubernetes)

(b) Networking Protocols & Technologies

Layer 2/3 switching and routing protocols

Network Address Translation (NAT) and Port Address Translation (PAT)

Quality of Service (QoS) implementation and traffic shaping

Network security principles and micro-segmentation

Load balancing and high availability networking

DNS, DHCP, and network services management

(c) Cloud & Virtualization

Virtual private clouds (VPC) design and implementation

Hybrid cloud connectivity and network integration

VMware NSX, Cisco ACI, or similar SDN platforms

Container networking (CNI plugins, service mesh)

Network Function Virtualization (NFV)

(d) Programming & Automation Skills

Network automation frameworks (Ansible, Puppet, Chef)

Infrastructure as Code (Terraform, CloudFormation)

API integration and REST/GraphQL proficiency

Version control systems (Git) and collaborative development

Linux system administration and shell scripting

(e) Preferred Qualifications

Bachelor's degree in Computer Science, Network Engineering, or related field

Industry certifications: CCIE, JNCIE, or equivalent expert-level certifications

Experience with network telemetry and observability tools

Knowledge of service mesh technologies (Istio, Linkerd)

Experience with network security tools and intrusion detection systems

Familiarity with agile development methodologies

Lotus Singapore Group

Today

Frontend Engineer - React - Part Time

Role Description

We are looking for a Frontend Engineer - React for a remote position. The role involves daily tasks such as developing and maintaining the user-facing parts of our web applications using React. The person will ensure that web applications are responsive and provide a seamless user experience. Collaboration with the back-end team and designers to integrate services and enhance site appearance is also expected. Continuous learning and implementing best practices in web development are key to this role.

Qualifications

  • Skills in Front-End Development and Responsive Web Design
  • Basic experience and understanding in Back-End Web Development
  • General skills in Software and Web Development
  • Good communication and teamwork abilities
  • Ability to work independently and remotely
  • Experience with React is a strong plus
  • Current enrollment in a relevant degree program or equivalent educational background

Quick Compare

Today

Artificial Intelligence Engineer

Experience: Two to five years of shipping production AI or machine-learning systems and scaling data-intensive back ends.

Why this role matters

Terrabase is shaping the next frontier of work AI: an adaptive platform where ambient and specialized agents mesh seamlessly to deliver the one answer that matters, instantly and safely. Think category-defining speed, unwavering accuracy, and enterprise-grade guardrails. Your mission: harden that edge with bulletproof eval loops, unbreakable safety nets, and ruthless performance tuning across our multi-agent engine.

What will you do

  • Own the evaluation loop: Design offline and real-time test harnesses, golden-set datasets, and automated regression dashboards that grade each new agent release on precision, recall, latency, and cost (a toy harness is sketched after this list).
  • Harden safety and guardrails: Implement content filters, prompt firewalls, and fallback chains so answers stay compliant with SOC 2 and HIPAA constraints.
  • Optimize prompts and retrieval: Iterate on system, user, and tool prompts for diverse enterprise workflows. Tune ranking models and vector search parameters to lift relevance.
  • Benchmark LLM approaches: Compare open-weight models, hosted APIs, and fine-tuned derivatives. Present trade-off reports that balance performance with budget.
  • Prototype and demo: Build thin, focused proof-of-concepts that show customers new capabilities before we commit to full sprint cycles.
  • Document and share best practices: Write concise run-books, design notes, and post-mortems so the next engineer can reproduce your results without guesswork.
  • Stay current: Track the latest research on retrieval-augmented generation, tool-calling agents, and evaluation methodologies; bring the most practical ideas into production.
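To ground the evaluation-loop bullet above, here is a deliberately small, framework-free sketch of grading one agent release against a golden set on precision, recall, and latency. The golden set and the agent_answer stub are invented for illustration; a real harness would call the deployed agent and log results to a regression dashboard.

```python
# Toy offline eval harness: compare agent answers to a golden set and report
# precision, recall, and mean latency. The agent call is a stand-in stub.
import time

golden_set = [
    {"question": "What is our refund SLA?", "expected": "5 business days"},
    {"question": "Which region hosts EU data?", "expected": "eu-west-1"},
]

def agent_answer(question: str) -> str:
    # Placeholder for the real agent/LLM call.
    return "5 business days" if "refund" in question else "us-east-1"

true_pos = false_pos = false_neg = 0
latencies = []
for case in golden_set:
    start = time.perf_counter()
    answer = agent_answer(case["question"])
    latencies.append(time.perf_counter() - start)
    if case["expected"].lower() in answer.lower():
        true_pos += 1
    else:
        false_pos += 1   # the agent answered, but incorrectly
        false_neg += 1   # the expected answer was missed

precision = true_pos / (true_pos + false_pos)
recall = true_pos / (true_pos + false_neg)
mean_latency = sum(latencies) / len(latencies)
print(f"precision={precision:.2f} recall={recall:.2f} mean_latency={mean_latency:.4f}s")
```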

What we look for

  • Two to five years of building or operating machine learning or data-intensive back-ends in production.
  • Strong work ethic and bias for ownership. You identify problems, propose fixes, and drive them to closure.
  • Clear, systematic thinker. Your design docs read like thinking in public, and your code structure reflects first principles reasoning.
  • Proficient Python engineer comfortable with type hints, pytest, and modern packaging.
  • Hands-on experience with at least one of: LangChain, LangGraph, or other agent frameworks.
  • Familiarity with vector databases and semantic search fundamentals.
  • Evidence of structured problem solving: could be a design doc, a refactored subsystem, or an open-source pull request.
  • Clear communication and bias for action-you unblock yourself and raise flags early.

Bonus points

  • Prior work with evaluation libraries such as Ragas, LM-Eval, or Intercode.
  • Experience integrating compliance guardrails or red-team testing for Gen-AI systems.
  • Contributions to open-source AI projects or published technical blogs.

Life at Terrabase

We operate as a sharp, humble, fully remote crew that values deep focus and fast feedback. Your code ships to real customers every week, supported by generous GPU budgets and a culture that prizes clear thinking over long meetings.

Terrabase is an equal opportunity employer. We celebrate diversity and are committed to building an inclusive environment for every team member.

Terrabase

Today

Data Engineer

Sr. Data Engineer

Location: Bangalore / Lucknow

Role Type: Full-time, Senior Level | Company website and LinkedIn: UBI LinkedIn

About UsefulBI:

UsefulBI is a leading AI-driven data solutions provider specializing in data engineering, cloud transformations, and AI-powered analytics for Fortune 500 companies. We help businesses turn complex data into actionable insights through our innovative products and services.

Overview:

UsefulBI is looking for highly skilled candidates with expertise in generating powerful business insights from very large datasets, where the primary aim is to enable needle-moving business impact through cutting-edge statistical analysis.

We are looking for passionate data engineers who can envision the design and development of analytical infrastructure that can support strategic and tactical decision-making. The candidate should be well-versed in statistics, R and SAS languages, machine learning techniques, mathematics, and SQL databases.

Experience Required:

  • Minimum 5+ years' experience in Data Engineering.
  • Must have good knowledge of and experience in Python.
  • Must have good knowledge of PySpark.
  • Must have good knowledge of Databricks.
  • Must have good experience in AWS/Azure.
  • Typically requires relevant analysis work and domain-area work experience.
  • Expert in the management, manipulation, and analysis of very large datasets.

Key Responsibilities:

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies (a minimal PySpark sketch follows this list).
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
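A minimal PySpark sketch of the extract-transform-load step mentioned above appears below; paths and column names are illustrative, and a real job would read from the actual lake or warehouse locations on AWS/Azure (for example, via Databricks).

```python
# Toy PySpark ELT step: read raw events, aggregate, and write a curated table.
# Paths and columns are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

events = spark.read.parquet("/data/raw/events")        # placeholder input path

rollup = (
    events.filter(F.col("event_type").isNotNull())
          .groupBy("event_date", "event_type")
          .agg(F.countDistinct("user_id").alias("unique_users"),
               F.count("*").alias("events"))
)

rollup.write.mode("overwrite").parquet("/data/curated/event_rollup")
spark.stop()
```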

UsefulBI Corporation

Today

Automation Engineer

About Company:

Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations.

The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services.

Job Title: Automation Engineer

Location: Pan India

Experience: 6+ years

Employment Type: Contract to hire

Work Mode: Hybrid

Notice Period: Immediate joiners only

Job Description:

Primary Skill: PowerShell scripting

Secondary Skill: Microsoft products such as Power Platform, Azure, Exchange, SharePoint, etc.

• Responsible for designing, developing, implementing, and maintaining custom low-code business solutions using Microsoft's Power Platform suite and SharePoint Online. This role supports the Microsoft 365 environment.

• Assist in the configuration and deployment of a full complement of Microsoft operating systems infrastructure, including Windows Server, Active Directory, Windows Workstation, and complementary services and software.

• Analyze customer requirements and provides solutions to a variety of technical problems of varying degrees of complexity

• Responsibility for engineering designs, development, and implementation of domain policies, hardening, and related procedures related to core infrastructure

• Responsible for performance and availability monitoring, providing appropriate access controls, hardware and software installation, as well as configuration and maintenance of all Windows-related technologies

• Perform maintenance and upgrade of the following computing environments: Windows Server 2022/2019, Windows 10/11.

• You will be a proactive self-starter requiring minimal supervision who can monitor existing infrastructure, identify and coordinate implementation changes to address problems or engineering changes with little to no impact to the end users.

• The engineer will collaborate with stakeholders to understand business requirements and leverage their expertise in backend administration and solution development to create and manage automation solutions.

• Additionally, the role includes responsibilities for testing, deployment, and documentation, as well as serving as a backup administrator for SharePoint, OneDrive, and Teams.

People Prime Worldwide

Today

Senior Development Engineer (UI)

Company

Founded in 1998, we develop and support the Actran software suite for acoustic simulation as part of Hexagon Manufacturing Intelligence. Leading automotive OEMs and suppliers, aircraft manufacturers, engine developers, audio system designers, and others use our technology to virtually improve the acoustic performance of their products through advanced simulation.

Actran is used by hundreds of companies worldwide including Airbus, Boeing, Safran, Rolls Royce, Renault, BMW, Ferrari, Toyota, Honda, Volvo, Bose, Microsoft, Panasonic and many more.

Actran is the premier acoustics software to solve acoustics, vibro-acoustics, and aero-acoustics problems. Used by automotive manufacturers and suppliers, aerospace and defense companies, and consumer product manufacturers, Actran helps engineers better understand and improve the acoustics performance of their designs.

Function & Responsibilities

As an acoustic development engineer, you will work in the Product Development team, which is responsible for Actran software development.

Your main responsibilities will include:

Development of new features in Actran, matching industrial expectations (accuracy, performance, robustness);

Participation in acoustic research topics;

Recommendations on new technologies to be integrated into Actran to solve new challenges efficiently;

Interfacing with third-party software when required;

Working on bug fixes;

Identifying software design problems and devising elegant solutions.

Quality Assurance (QA), industrial validation and software documentation benefit from daily interactions with dedicated teams.

Profile

PhD in Applied Sciences, Computer Sciences (or equivalent by experience)

Programming skills in Python and C++

Experience with a commercial structural dynamics solver (Nastran, Abaqus, Ansys, Optistruct)

Experience in programming on Linux environment

Experience in acoustic research;

Some experience in the design of complex object-oriented software:

o C++: generic programming, Standard Template Library, boost libraries;

o Python: C-bindings, Python extension libraries, numpy, scipy, matplotlib;

o Familiarity with the Git versioning system, CI/CD development processes, and containerization tools;

o Experience with the Qt framework and the VTK library is a plus;

o Basic knowledge of CAE-FEM tools (Ansa, Catia, Hypermesh) is a plus;

Soft skills including creativity, autonomous self-learning, curiosity, the ability to think outside the box, a solution-oriented attitude, quality awareness, team spirit, and flexibility;

A good level of English.

What we offer

Besides an attractive salary package, the company provides a young and dynamic work environment. The average age of the team is below 40, and it consists mostly of engineers and PhDs. The development team is fully based in Europe (Belgium and France).

Hexagon welcomes new talents and invests in their development. This results in a very creative and inspiring atmosphere which is influenced by every single individual, including you! Innovation, excellence, motivation and passion are among the most important values that our employees share.

Hexagon Manufacturing Intelligence

Today

Senior Data Engineer

About us

One team. Global challenges. Infinite opportunities. At Viasat, we're on a mission to deliver connections with the capacity to change the world. For more than 35 years, Viasat has helped shape how consumers, businesses, governments and militaries around the globe communicate. We're looking for people who think big, act fearlessly, and create an inclusive environment that drives positive impact to join our team.

What you'll do

You are a capable, self-motivated data engineer, proficient in software development methods, including Agile/Scrum. You will be a member of the data engineering team, working on tasks ranging from the design, development, and operations of data warehouses to data platform functions. We enjoy working closely with each other, utilizing an agile development methodology. Priorities can change quickly, but our team members can stay ahead to delight every one of our customers, whether they are internal or external to Viasat.

The day-to-day

  • Strong programming experience using Python.
  • Proven track record with 5+ years of experience as a data engineer or experience working on data engineering projects/platforms.
  • Working experience with data pipelines & methodologies. Experience with SQL and a wide variety of databases, like PostgreSQL.
  • Good knowledge of and experience with distributed computing frameworks like Spark
  • Good experience with source code management systems like Git
  • Capable of tuning databases and SQL queries to meet performance objectives.
  • Bachelor's degree in computer science, computer engineering, or electrical engineering or equivalent technical background and experience
  • Embracing the DevOps philosophy of product development, in addition to your design and development activities, you are also required to provide operational support for the post-production deployment.

What you'll need

  • Experience Requirement: 5+ years
  • Education Requirement: Bachelor's degree
  • Travel Requirement: Up to 10%

What will help you on the job

  • Experience with cloud providers like AWS, containerization, and container orchestration frameworks like Kubernetes is preferred.
  • Working experience with data warehouses & ETL tools.
  • Capable of debugging sophisticated issues across various ETL platforms, and databases.
  • Experience with DevOps and tools such as Jenkins, and Ansible is an advantage.
  • Experience with small- to mid-sized software development projects. Experience with Agile Scrum is a plus.
  • Understanding of routing, switching, and basic network communication protocols.

Viasat

Today

Full-Stack Engineer (Frontend-Focused)

Full-Stack Engineer (Frontend-Focused)

Hybrid preferred (Ahmedabad) | Contract role | Immediate start

About Routed AI

We're building the career intelligence engine of the future where users don't just get career advice, they simulate, plan, and act on it. Think real-time agents, dynamic career maps, and decision-making powered by data from 1B+ profiles, jobs, and transitions.

Built by NYU and IBM alumni, Routed AI is already in the hands of hundreds of users. We're now hiring a founding engineer who's as fast as they are thoughtful - someone who wants to build product, not just push tickets.

What You'll Do

You'll take ownership of the frontend experience and build fast, functional, and delightful features end-to-end.

  • Develop and optimize dynamic interfaces using React + TypeScript
  • Improve app performance through smart caching, parallel API calls, and responsive data handling
  • Integrate AI agent responses into frontend workflows - including chat agents and intelligent suggestion systems
  • Collaborate with our backend (Python + FastAPI) to wire up APIs and ensure smooth data flow (a minimal endpoint sketch follows this list)
  • Identify and solve latency issues, UI glitches, and product bottlenecks - quickly and proactively
  • Think in systems - caring about product experience just as much as performance
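For context on the backend mentioned above, here is a hedged, minimal FastAPI endpoint of the kind a frontend engineer might wire a React/TypeScript client against; the route and payload shape are invented for illustration and are not Routed AI's actual API.

```python
# Minimal FastAPI sketch of a backend endpoint a React client could call.
# Route and response shape are illustrative only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CareerQuery(BaseModel):
    current_role: str
    target_role: str

@app.post("/career-paths")
def suggest_paths(query: CareerQuery) -> dict:
    # A real implementation would call the AI agent layer here.
    return {
        "from": query.current_role,
        "to": query.target_role,
        "steps": ["Build a portfolio project", "Target an adjacent role", "Apply"],
    }

# Run with: uvicorn main:app --reload
```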

What We're Looking For
  • Strong hands-on experience with React, TypeScript, and modern frontend architecture
  • Comfort with API integration, caching, and optimizing for real-world performance
  • Some backend experience (Python/FastAPI or similar) - enough to debug, build, or improve
  • Bonus: Experience integrating AI/LLM-based tools into web applications
  • Bonus: Based in or near Ahmedabad (hybrid preferred)

You don't need to know everything - but you should be fast, thoughtful, and eager to solve real problems.

Why Join Us
  • Real ownership: Your work will go live fast and impact users directly
  • Work closely with experienced, technical founders
  • Paid contract role with long-term potential - if we work well together, we'll grow with you
  • Get in early on a bold, useful product backed by real usage and momentum

How to Apply

Email with:

  • Your resume and LinkedIn
  • A few short lines on why you want to work with us
  • Your GitHub or portfolio
  • (Optional bonus): Links or screenshots of products you've built - especially things you're proud of

Routed AI

Today

AWS Engineer

Role Description

This is a full-time on-site role for an AWS Engineer located in Trivandrum. The AWS Engineer will be responsible for software development, infrastructure management, cloud computing, Linux administration, and database maintenance.

Must Have

- Minimum 5 years of hands-on experience in the AWS cloud, at least with S3, EC2, MSK, Glue, DMS, and SageMaker (a toy S3 example follows these points).

- Bachelor's degree in Computer Science or a related field; should have development/work experience in Python, Docker, and containerization.

- Should be able to troubleshoot problems, review designs, and code solutions.

- AWS-certified candidates are preferred.
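As a toy illustration of the S3 experience listed in the first point above, the snippet below uses boto3 to list and upload objects; the bucket and key names are placeholders, and credentials are assumed to come from the environment or an IAM role.

```python
# Illustrative S3 usage with boto3: list recent objects and upload a file.
# Bucket/key names are placeholders; credentials come from the environment/IAM role.
import boto3

s3 = boto3.client("s3")
bucket = "example-data-bucket"   # placeholder bucket name

response = s3.list_objects_v2(Bucket=bucket, Prefix="incoming/", MaxKeys=10)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

s3.upload_file("report.csv", bucket, "reports/report.csv")
```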

Qualifications

  • Software Development skills
  • Infrastructure and Cloud Computing expertise
  • Linux and Database administration experience
  • Strong problem-solving and analytical skills
  • AWS certification is a plus

Fincita Consulting Inc

Today

Cloud Engineer II T5

About McDonald's:

One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Cloud Engineer II

Full-time

McDonald's Office Location: Hyderabad

Global Grade: G3

Job Description:

This opportunity is part of the Global Technology Infrastructure & Operations team (GTIO), where our mission is to deliver modern and relevant technology that supports the way McDonald's works. We provide outstanding foundational technology products and services including Global Networking, Cloud, End User Computing, and IT Service Management. It's our goal to always provide an engaging, relevant, and simple experience for our customers.

The Cloud DevOps Engineer II role is part of the Cloud Infrastructure and Operations team in Global Technology Infrastructure & Operations. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automating, and optimizing the software development and deployment process, specifically for Google Cloud. The Cloud DevOps Engineer II will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services.

This is a great opportunity for an experienced technology leader to help craft the transformation of infrastructure and operations products and services for the entire McDonald's environment.

Responsibilities & Accountabilities:

  • Participate in the management, design, and solutioning of platform deployment and operational processes.
  • Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support.
  • Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code.
  • Proactively identify opportunities for continuous improvement
  • Research, analyze, design, develop and support high-quality automation workflows inside and outside the cloud platform that are appropriate for business and technology strategies.
  • Develop and maintain infrastructure and tools that support the software development and deployment process.
  • Automate the software development and deployment process.
  • Monitor and troubleshoot the software delivery process.
  • Work with software developers and operations engineers to improve the software delivery process.
  • Stay up to date on the latest DevOps practices and technologies.
  • Drive proof of concepts and conduct technical feasibility studies for business requirements.
  • Strive to provide internal and external customers with excellent customer service and world-class service.
  • Effectively communicate project health, risks, and issues to the program partners, sponsors, and management teams.
  • Resolve most conflicts between timeline, budget, and scope independently but intuitively raise complex or consequential issues to senior management.
  • Implement and support monitoring best practices
  • Respond to platform and operational incidents and effectively troubleshoot and resolve issues
  • Work well in an agile environment

Qualifications:

  • Bachelor's degree in computer science or a related field or relevant experience.
  • 5+ years of Information Technology experience for a large technology company, preferably in a platform team.
  • 4+ years hands-on Cloud DevOps pipeline for automating, building, and deploying microservice applications, APIs, and non-container artifacts.
  • 3+ years working with Cloud technologies with good knowledge of IaaS and PaaS offerings in AWS & GCP.
  • 3+ years GitHub, Jenkins, GitHub Actions, ArgoCD, Helm Charts, Harness and Artifactory or similar DevOps CI/CD tools.
  • 3+ years of application development using agile methodology.
  • Experience with observability tools like Datadog, New Relic and open source (O11y) observability ecosystem (Prometheus, Grafana, Jaeger)
  • Hands-on knowledge of an Infrastructure-as-Code and associated technologies (e.g., repos, pipelines, Terraform, etc.)
  • Advanced knowledge of the AWS platform, preferably with 3+ years of AWS/Kubernetes or other container-based technology experience.
  • Experience working with code quality, SAST, and DAST tools such as SonarQube/SonarCloud, Veracode, Checkmarx, and Snyk is a plus.
  • Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, Ruby, etc.
  • Self-starter, able to come up with solutions to problems and complete those solutions while coordinating with other teams.
  • Knowledge of foundational cloud security principles
  • Excellent problem-solving and analytical skills
  • Strong communication and partnership skills
  • Any GCP Certification.
  • Any Agile certification preferably scaled agile.

McDonald's

Today

Senior Software Engineer

Senior Software Engineer

Bengaluru, IND | Technology - Application Development | Full Time | Hybrid

WHO WE ARE

Genesis transforms application development in financial markets by offering a low-code platform that supercharges developers and enables organizations to build performant, secure applications with unmatched speed, efficiency, and scale.

We have the vigor and passion of a startup and the skill and experience of a scale-up, consistently refining and exploring ways to make work better for everyone.

To help us achieve our vision of reinventing the way financial market organizations build software, we are looking for people who aren't afraid to challenge the status quo - people who are passionate about change.

If you are a self-starter with a solution-orientated mindset, you'll find a home at Genesis.

WHAT YOU NEED & THE HATS YOU WILL WEAR

You will be an experienced Java Backend/Full Stack Developer joining our Bangalore team. Because the Genesis Low-Code Platform brings together the best of high-performance back-end and web technologies to deliver 80% reductions in time to market for development teams, you will need to be experienced enough to help design and build applications and solutions using the Genesis Platform.

WHERE WE SEE THIS ROLE GOING

  • While we see this role as a Java expert who will design, implement, test and maintain products built using Genesis Framework, we are also looking for someone who has a clear willingness to be adaptable and flex to new ways of working as this will be critical to remain agile and innovative
  • You will be a strong collaborator and as you work through the full life cycle of software development working closely with product owners, UI/UX developers, and core engineers in an iterative process. This will allow you to build knowledge of our business, services, and client needs
  • As we work to tight deadlines, you will be used to setting project milestones, keeping track of progress that contributes to wider project success, and delivering software releases on time with the expected capabilities and quality

REAL WORLD COMPETENCIES THAT MATCH OUR VALUES

  • Knowledge seeker: You will have at least 6 years of Java development experience but be constantly looking to expand your knowledge in areas in and outside of your direct remit. Knowledge of front end and/or back-end Java development is crucial.
  • Collaborative Influencer: Someone who is always looking to meet and discuss how things can be improved or how we can work better, whilst bringing others along with you.
  • Fearless Technical Expert: You will have high competence in at least two of JavaScript, TypeScript, Angular, Vue, and React, as well as CSS/SASS, WPF, NoSQL database systems (MongoDB, Aerospike, etc.), and relational databases (e.g., Oracle, MS SQL, Postgres).

BENEFITS WE HELP OUR PEOPLE THRIVE

At Genesis, we recognize that taking care means looking after the whole self, both at and away from the office. We are committed to enhancing the well-being of our team members through flexible, individualized benefits.

Competitive salary and a stake in the company's success through a defined bonus

18 days per year plus public holidays.

Top-level private medical healthcare insurance covering dependents

A healthy remote working allowance to help set up your home office

A dedicated training allowance with access to a great portfolio of training providers

An annual wellbeing allowance to spend on anything that will benefit your mental and/or physical wellbeing

Hybrid Working - We encourage you to work from the office a minimum of one or two days per week

A stake in our success through our Employee Equity Scheme

Genesis Global

Today

Cyber Security Engineer

Our client was founded in 2002. With offices in the US, India, Europe, Canada, Singapore, Costa Rica, Brazil, and the UK, they have national and international scope and reach, backed by decades of experience and deep domain expertise. They specialize in products such as AI Governance/Data Privacy and services such as Interactive (Product, Discovery, Research, User Journey, Prototyping), Talent, Cloud (Development, Transformation, SRE, Architecture), Engineering (Web, Mobile, Strategy), Enterprise (Salesforce, ServiceNow, SAP, Oracle, Microsoft, Workday), and Training (Corporate Learning Design and Development), as well as building cost-effective offshore captive Global Capability Centers.

Required Qualifications

• 3 to 5 years of relevant experience in cybersecurity engineering, with deep expertise in Splunk, SIEM, SOAR, ML, and automated data pipelines.

• 3+ years of experience with security automation platforms (SOAR) such as Splunk SOAR, XSOAR, Swimlane, etc.

• 3+ years of experience in cyber data engineering or analytics, including log processing and data pipeline architecture.

• Strong proficiency in Python, PowerShell, and API integrations.

• Proven experience with GitLab, automation platform deployment, and pipeline troubleshooting.

• Hands-on experience with ETL tools, relational and columnar databases, and data visualization tools such as Power BI.

• Solid understanding of SIEM design, normalization, and correlation strategies.

• Excellent debugging, problem-solving, and communication skills.

• Bachelor's degree in Computer Science, Engineering, Cybersecurity, or equivalent technical field (or 10+ years of experience).

Preferred Qualifications

• Hands-on experience with cloud environments such as AWS, Azure, or GCP.

• Strong knowledge of cloud-native security technologies, serverless architecture, and containerized data flows.

• Cybersecurity certifications such as CISSP, CISM, CISA, or equivalent.

• Experience working in Agile or DevSecOps environments with CI/CD pipelines.

• Familiarity with corporate change management practices and IT governance frameworks.

People Prime Worldwide

Today

Platform Engineer II - Enablement Services Support T9

About McDonald's:

One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary:

We're seeking a hands-on Platform Engineer to support our enterprise data integration and enablement platform. As a Platform Engineer II, you'll be responsible for designing, maintaining, and optimizing secure and scalable data movement services such as batch processing, file transfers, and data orchestration. This role is essential to ensuring reliable data flow across systems to power analytics, reporting, and platform services in a cloud-native environment.

Who we're looking for:

Primary Responsibilities:

  • Hands-On Data Integration Engineering
      • Build and maintain data transfer pipelines, file ingestion processes, and batch workflows for internal and external data sources.
      • Configure and manage platform components that enable secure, auditable, and resilient data movement.
      • Automate routine data processing tasks to improve reliability and reduce manual intervention.
  • Platform Operations & Monitoring
      • Monitor platform services for performance, availability, and failures; respond quickly to disruptions.
      • Tune system parameters and job schedules to improve throughput and processing efficiency.
      • Implement logging, metrics, and alerting to ensure end-to-end observability of data workflows.
  • Security, Compliance & Support
      • Apply secure protocols and encryption standards to data transfer processes (e.g., SFTP, HTTPS, GCS/AWS); a small transfer sketch follows this list.
      • Support compliance with internal controls and external regulations (e.g., GDPR, SOC 2, PCI).
      • Collaborate with security and infrastructure teams to manage access controls, service patches, and incident response.
  • Troubleshooting & Documentation
      • Investigate and resolve issues related to data processing failures, delays, or quality anomalies.
      • Document system workflows, configurations, and troubleshooting runbooks for team use.
      • Provide support for platform users and participate in on-call rotations as needed.
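As a hedged sketch of the secure file movement described above, the snippet below pushes a batch file to a partner SFTP endpoint with paramiko. Host, key path, and remote directory are placeholders; a production pipeline would add retries, checksums, and audit logging.

```python
# Illustrative SFTP transfer with paramiko; connection details are placeholders.
import paramiko

host, port = "sftp.partner.example.com", 22
key = paramiko.RSAKey.from_private_key_file("/secrets/svc_transfer_key.pem")

transport = paramiko.Transport((host, port))
transport.connect(username="svc_transfer", pkey=key)
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    sftp.put("daily_batch.csv", "/inbound/daily_batch.csv")   # local -> remote
    print(sftp.stat("/inbound/daily_batch.csv").st_size, "bytes delivered")
finally:
    sftp.close()
    transport.close()
```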

Skill:

  • 3+ years of hands-on experience in data integration, platform engineering, or infrastructure operations.
  • Proficiency in:
      • Designing and supporting batch and file-based data transfers
      • Python scripting and SQL for diagnostics, data movement, and automation
      • Terraform scripting and deployment of cloud infrastructure services
      • Working with GCP (preferred) or AWS data analytics services, such as:
          • GCP: Cloud Storage, BigQuery, Cloud Composer, Pub/Sub, Dataflow
          • AWS: S3, Glue, Redshift, Athena, Lambda, EventBridge, Step Functions
      • Cloud-native storage and compute optimization for data movement and processing
      • Infrastructure-as-code and CI/CD practices (e.g., Terraform, Ansible, Cloud Build, GitHub Actions)
  • Strong analytical and debugging skills for troubleshooting issues in distributed, high-volume environments.
  • Bachelor's degree in computer science, Information Systems, or a related technical field.

Work location: Hyderabad, India

Work pattern: Full time role.

Work mode: Hybrid.

Additional Information:

McDonald's is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald's provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

McDonald's Capability Center India Private Limited ("McDonald's in India") is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald's in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald's in India does not discriminate based on race, religion, color, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws.

Nothing in this job posting or description should be construed as an offer or guarantee of employment.

McDonald's

Today

Remote Fullstack Engineer - 17853

Join a leading U.S.-based client as a Full-Stack Engineer, where you will play a key role in developing AI-driven solutions for commercial and research applications. This role is ideal for innovative problem-solvers who enjoy building scalable applications and collaborating with top experts in AI and software development. You will work with diverse companies to create cutting-edge technologies, shaping the future of intelligent systems.

Job Responsibilities:

  • Develop scalable solutions using Python and JavaScript/TypeScript.
  • Collaborate with stakeholders to align technical solutions with business goals.
  • Implement efficient algorithms and scripts for seamless user interactions.
  • Troubleshoot issues, document problems, and provide effective solutions.
  • Work closely with researchers to understand requirements and deliver insights.

Job Requirements:

  • Bachelor's or Master's degree in Engineering, Computer Science, or a related field.
  • Strong understanding of software engineering concepts.
  • Proficiency in Python, JavaScript, and TypeScript.
  • Excellent problem-solving and communication skills in English.

Why Join Us?

  • Work with top global experts and expand your professional network.
  • This is a contractual remote work opportunity without traditional job constraints.
  • Competitive salary aligned with international standards.
  • Contribute to cutting-edge AI projects shaping the future of technology.

Selection Process:

  • Shortlisted developers may be asked to complete an assessment.
  • If you clear the assessment, you will be contacted for contract assignments with expected start dates, durations, and end dates.
  • Some contract assignments require fixed weekly hours, averaging 20/30/40 hours per week for the duration of the contract assignment.

Turing

Data Engineer III T5

About McDonald's:

One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary:

We are looking to hire a Data Engineer at the G4 level with a deep understanding of the Data Product Lifecycle, Standards, and Practices. You will be responsible for building scalable and efficient data solutions to support the Brand Marketing / Menu function, with a specific focus on the Menu Data product and initiatives. As a Data Engineer, you will collaborate with data scientists, analysts, and other cross-functional teams to ensure the availability, reliability, and performance of data systems. You will lead initiatives to enable trusted Menu data, support decision-making, and partner with business and technology teams to deliver scalable data solutions that drive insights into menu performance, customer preferences, and marketing effectiveness. Expertise in cloud computing platforms, technologies, and data engineering best practices will play a crucial role within this domain.

Who we're looking for:

Primary Responsibilities:

  • Builds and maintains relevant and reliable Menu data products that support menu and marketing Analytics. Develops and implements new technology solutions as needed to ensure ongoing improvement with data reliability and observability in-view.
  • Participates in new software development and leads data engineering initiatives supporting Product Mix Analytics, ensuring timely and accurate delivery of marketing and menu-related products.
  • Works closely with the Product Owner to help define the business rules that determine the quality of Menu datasets.
  • Drive and implement best practices for pipeline development, data governance, data security and quality across marketing and menu-related datasets.
  • Ensure scalability, maintainability, and quality of data systems powering menu item tracking, promotion data, and marketing analytics.
  • Staying up to date with emerging data engineering technologies, trends, and best practices, and evaluating their applicability to meet evolving Product Mix analytics needs.
  • Documenting data engineering processes, workflows, and solutions for knowledge sharing and future reference.
  • Mentor and coach junior data engineers, particularly in areas related to menu item tracking, promotion data, and marketing analytics.
  • Ability and flexibility to coordinate and work with teams distributed across time zones, as needed

Skill:

  • Leads teams to drive scalable data engineering practices and technical excellence within the Menu Data ecosystem.
  • Bachelor's or Master's degree in Computer Science or a related engineering field, plus deep experience with cloud computing
  • 5+ years of professional experience in data engineering or related fields
  • Proficiency in Python, Java, or Scala for data processing and automation
  • Hands-on experience with data orchestration tools (e.g., Apache Airflow, Luigi) and big data ecosystems (e.g., Hadoop, Spark, NoSQL)
  • Expert knowledge of data quality functions such as cleansing, standardization, parsing, de-duplication, mapping, hierarchy management, etc. (a toy example follows this list)
  • Ability to perform extensive data analysis (comparing multiple datasets) using a variety of tools
  • Proven ability to mentor team members and lead technical initiatives across multiple workstreams
  • Effective communication and stakeholder management skills to drive alignment and adoption of data engineering standards
  • Demonstrated experience in data management & data governance capabilities
  • Familiarity with data warehousing principles and best practices.
  • Excellent problem solver - use of data and technology to solve problems or answer complex data related questions
  • Excellent collaboration skills to work effectively in cross-functional teams.
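
For illustration only, a toy Python/pandas example of the cleansing, parsing, and de-duplication functions named above (the menu items, columns, and prices are invented, not actual data):

```python
import pandas as pd

# Invented sample of messy menu records.
raw = pd.DataFrame({
    "item_name": ["  McSpicy Paneer", "mcspicy paneer", "McAloo Tikki "],
    "price": ["₹199", "199", "₹109"],
})

clean = raw.copy()
clean["item_name"] = clean["item_name"].str.strip().str.title()    # standardize text
clean["price"] = (clean["price"]
                  .str.replace("₹", "", regex=False)
                  .astype(float))                                   # parse price to a number
clean = clean.drop_duplicates(subset="item_name")                   # de-duplicate items
print(clean)
```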

Work location: Hyderabad, India

Work pattern: Full time role.

Work mode: Hybrid.

McDonald's

Senior Security Engineer

Job description

As a Security Engineer - VAPT, you will be responsible for conducting comprehensive security assessments, identifying vulnerabilities, and implementing effective remediation strategies. Leveraging your expertise in penetration testing and ethical hacking, you will play a key role in enhancing the security posture of our clients' systems and networks. This position offers an exciting opportunity to work on challenging projects, collaborate with talented professionals, and contribute to the advancement of cybersecurity practices.

Key Responsibilities:

  • Perform end-to-end Vulnerability Assessment and Penetration Testing (VAPT) for clients' IT infrastructure, applications, and networks.
  • Conduct thorough security assessments using industry-standard tools and methodologies, including but not limited to, Nmap, Nessus, Metasploit, Burp Suite, and OWASP.
  • Identify and exploit security vulnerabilities to assess the potential impact on clients' systems and data.
  • Prepare detailed assessment reports outlining findings, risk levels, and recommended remediation measures.
  • Collaborate with clients' IT teams to prioritize and address identified security issues in a timely manner.
  • Develop and implement custom scripts or tools to enhance testing capabilities and automate repetitive tasks.
  • Stay abreast of emerging security threats, vulnerabilities, and industry best practices to continually improve testing methodologies.
  • Provide guidance and mentorship to junior security engineers, fostering a culture of knowledge sharing and skill development within the team.

Requirements:

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • 5+ years of experience in cybersecurity, with a focus on Vulnerability Assessment and Penetration Testing.
  • Proficiency in using tools such as Nmap, Nessus, Metasploit, Burp Suite, and OWASP.
  • Hands-on experience with various operating systems, including Windows, Linux, and Unix.
  • Strong understanding of network protocols, web application architecture, and common security vulnerabilities.
  • Certified Ethical Hacker (CEH), Offensive Security Certified Professional (OSCP), or similar certifications preferred.
  • Excellent analytical skills and attention to detail, with the ability to prioritize and manage multiple tasks effectively.
  • Effective communication skills, both verbal and written, with the ability to convey technical concepts to non-technical stakeholders.
  • Proven track record of delivering high-quality security assessments and actionable recommendations

TAC Security

Senior Software Engineer (Python with ML)

Who are we?

Securin is a leading product-based company, backed by services, in the cybersecurity domain, helping hundreds of customers worldwide gain resilience against emerging threats. Our products are powered by accurate vulnerability intelligence, human expertise, and automation, enabling enterprises to make crucial security decisions to manage their expanding attack surfaces.

Securin is built on the foundation of in-depth penetration testing and vulnerability research to help organizations continuously improve their security posture. Our team of intelligence experts is one of the best in the industry, and our comprehensive portfolio of tech-enabled solutions includes Attack Surface Management (ASM), Vulnerability Intelligence (VI), Penetration Testing, and Vulnerability Management. These solutions allow our customers to gain complete visibility of their attack surfaces, stay informed of the latest security threats and trends, and proactively address risks.

What do we promise?

We are a highly effective tech-enabled cybersecurity solutions provider and promise continual security posture improvement, enhanced attack surface visibility, and proactive prioritised remediation for every one of our client businesses.

What do we deliver?

Securin helps organizations to identify and remediate the most dangerous exposures, vulnerabilities, and risks in their environment. We deliver predictive and definitive intelligence and facilitate proactive remediation to help organizations stay a step ahead of attackers.

By utilising our cybersecurity solutions, our clients can have a proactive and holistic view of their security posture and protect their assets from even the most advanced and dynamic attacks.

Securin has been recognized by national and international organizations for its role in accelerating innovation in offensive and proactive security. Our combination of domain expertise, cutting-edge technology, and advanced tech-enabled cybersecurity solutions has made Securin a leader in the industry.

Job Location : IIT Madras Research Park, A block, Third floor, 32, Tharamani, Chennai, Tamil Nadu 600113

Work Mode: Hybrid, Work from office - Chennai, 2 days a week

Compensation: Up to 30 LPA

Job Title: Senior Software Engineer (With Machine Learning Experience)

Job Description:

We are seeking a skilled and motivated Python Engineer with 5+ years of professional experience, including at least 2 years of hands-on experience in Machine Learning (ML). The ideal candidate will possess strong Python development skills, a deep understanding of object-oriented programming (OOP), and practical experience with NoSQL databases, especially MongoDB. Familiarity with cloud platforms such as AWS, GCP, or Azure is also required.

This role is perfect for a developer who is not only proficient in backend engineering but also enthusiastic about applying ML concepts in real-world applications. You'll work closely with cross-functional teams to develop and optimize scalable, reliable, and maintainable Python-based solutions.

Responsibilities :

  • Design, develop, and maintain Python applications with a focus on performance and scalability.
  • Design systems with non-linear time complexity and efficient space usage across compute and storage. Ensure stateless, idempotent request processing with no in-memory state (a minimal sketch follows this list). Model schemas for future evolution, supporting increasing data volume and structural changes.
  • Build and operate cloud-based SaaS applications with a focus on production reliability. Design includes not only functional code but also integrated monitoring, alerting, and health checks to ensure observability and operational excellence in a multi-tenant environment.
  • Apply object-oriented programming (OOP) principles to craft reusable, modular code.
  • Develop, implement, and optimize machine learning models in production environments.
  • Leverage NoSQL databases like MongoDB for efficient data storage and retrieval.
  • Work with cloud platforms (AWS, GCP, Azure) for application deployment and data services.
  • Write and maintain robust unit and integration tests using test-driven development (TDD) practices.
  • Participate in the full software development lifecycle - from requirements gathering to deployment.
  • Collaborate with cross-functional teams, participate in Agile ceremonies, and contribute to technical discussions.
  • Engage in code reviews and mentor junior team members where appropriate.
  • Stay updated with emerging trends in Python development, machine learning, and software engineering best practices.
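
For illustration, a minimal Python sketch of stateless, idempotent request handling as described above (the endpoint, idempotency-key header, and Redis-backed store are assumptions for the example, not the company's actual design):

```python
import json
import redis
from fastapi import FastAPI, Header

app = FastAPI()
store = redis.Redis(decode_responses=True)     # external store: no in-process state

def create_order(payload: dict) -> dict:
    # Placeholder for the real side effect (database write, downstream call, ...).
    return {"order_id": payload.get("id"), "status": "created"}

@app.post("/orders")
def post_order(payload: dict, idempotency_key: str = Header(...)):
    cached = store.get(idempotency_key)
    if cached:                                 # replayed request: return the stored result
        return json.loads(cached)
    result = create_order(payload)
    store.set(idempotency_key, json.dumps(result), nx=True)   # first writer wins
    return result
```

Because the result is keyed by the client-supplied idempotency key and kept outside the process, any replica can serve a retry without repeating the side effect.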

Requirements:

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 5+ years of professional experience in Python development.
  • At least 2 years of hands-on experience with Machine Learning (model development, evaluation, and deployment).
  • Strong understanding of OOP principles and real-world software design.
  • Experience working with NoSQL databases, particularly MongoDB.
  • Familiarity with TDD practices and writing unit tests.
  • Practical experience with cloud platforms (AWS, GCP, or Azure).
  • Proficiency with version control systems such as Git.
  • Excellent problem-solving and debugging skills.
  • Strong communication and teamwork abilities.
  • A proactive, self-motivated attitude with a passion for continuous learning.

Preferred Qualifications:

  • Hands-on experience in AI concepts including LLMs, prompt engineering, or traditional AI.
  • Strong grasp of supervised, unsupervised, and reinforcement learning with practical experience in key ML algorithms (e.g., regression, SVMs, neural networks, clustering). Proficient with ML frameworks like scikit-learn, TensorFlow, PyTorch, and XGBoost. Solid foundation in math (linear algebra, calculus, probability, statistics) and understanding of optimization and loss functions. Experience with model serving using Flask, FastAPI, or TensorFlow Serving.
  • Experience with Python ML libraries such as scikit-learn, TensorFlow, PyTorch, or similar.
  • Knowledge of Agile/Scrum methodologies and collaborative development workflows.

What We Offer:

  • A collaborative and innovative team environment.
  • Opportunities to work on AI/ML-powered products and projects.
  • Ongoing learning and career development opportunities.
  • A dynamic culture focused on growth, curiosity, and problem-solving.

If you're a Python developer with a strong foundation and a growing passion for machine learning, we'd love to hear from you!

Why should we connect?

We are a bunch of passionate cybersecurity professionals who are building a culture of security. Today, cybersecurity is no longer a luxury but a necessity, with a global market value of $150 billion.

At Securin, we live by a people-first approach. We firmly believe that our employees should enjoy what they do. For our employees, we provide a hybrid work environment with competitive best-in-industry pay, while providing them with an environment to learn, thrive, and grow. Our hybrid working environment allows employees to work from the comfort of their homes or the office if they choose to.

For the right candidate, this will feel like your second home. If you are passionate about cybersecurity just as we are, we would love to connect and share ideas.

Securin Inc.

Principal Software Engineer (JavaScript)

Job Type: Full-Time

Location: Bangalore

️ Experience Required: 8-12 years in software engineering, including 5+ years in enterprise SaaS

Hiring Process: Resume review → Intro call → Technical interviews → Offer

About the Company

Saltmine, a rapidly growing technology firm transforming the $300B Commercial Real Estate industry, is hiring a Principal Engineer. Our SaaS platform revolutionizes workplace design, strategy, and procurement through data-driven innovation. With global operations across the US, Europe, and Asia, we are backed by top venture capital firms.

Position Summary

We're looking for a Principal Engineer to lead technically challenging, business-critical projects with direct revenue impact. In this high-autonomy role, you'll drive architectural decisions, DevOps strategy, and AI-enabled automation, while mentoring engineers and owning technical excellence across the board.

Key Responsibilities

Technical Leadership:

  • Architect and implement scalable, enterprise-grade solutions with measurable revenue impact
  • Lead development of innovative features solving real-world customer challenges
  • Establish engineering best practices, coding standards, and architectural principles

️ DevOps & Infrastructure Ownership:

  • Champion DevOps culture across the organization, focusing on CI/CD, infrastructure-as-code, and automation
  • Design and evolve cloud infrastructure with a focus on performance, availability, and cost-efficiency
  • Implement and oversee robust monitoring, logging, and incident response processes

AI & Automation Integration:

  • Drive adoption of AI capabilities to enhance both the product and engineering workflows
  • Build and deploy automation frameworks that improve system scalability and resilience
  • Leverage AI tools and models to solve complex technical and operational challenges

Required Qualifications

  • 8-15 years of hands-on software engineering experience, with 5+ years in enterprise SaaS environments
  • Deep DevOps expertise - including CI/CD, Terraform or similar IaC, and cloud-native architecture
  • Proven backend development experience using Node.js, TypeScript, and microservices architecture
  • Experience deploying and integrating AI/ML capabilities in production systems
  • Strong track record in incident management and cross-functional resolution
  • Customer-first mindset with the ability to translate business needs into technical solutions
  • Demonstrated ability to deliver high-impact, revenue-driven engineering initiatives.

Saltmine

Senior Software Engineer - React Native (3 to 6 years)

We are excited to announce that we are expanding our team in India to support TransDyne, a large healthcare technology and medical transcription company based out of the US. TransDyne is dedicated to providing innovative solutions that help healthcare providers deliver better patient care.

As a member of our development team, you will have the opportunity to work on cutting-edge projects that make a real difference in people's lives. Our company values collaboration, creativity, and a passion for excellence. We believe in creating a work environment that fosters personal and professional growth, and we offer competitive compensation packages and excellent benefits.

We are seeking talented and experienced developers to join our team. As a candidate with 3 to 6 years of experience, you should have a strong foundation in software development principles and practices. You should be proficient in at least one programming language and have experience working on web-based or mobile applications.

We are looking for individuals who are passionate about technology, have a strong work ethic, and are committed to delivering exceptional results. You should be able to work independently and as part of a team and have excellent communication skills. We offer a collaborative work environment where you will have the opportunity to work with other talented developers and learn from experienced industry professionals.

If you are looking for a challenging and rewarding career in healthcare technology, we invite you to join our offshore development team. Together, we can make a real difference in the lives of patients and healthcare providers around the world.

Skills Required:

  • Strong expertise in React Native with a thorough understanding of its core principles
  • Translate designs and wireframes into high-quality code, while creating reusable components
  • Strong proficiency in JavaScript, HTML, CSS, RESTful APIs, modern front-end development tools and practices
  • Experience with React JS is a huge plus
  • Knowledge of native Android and iOS development is highly desirable
  • Independent, good problem solver, able to debug and fix issues quickly and efficiently; this requires strong analytical skills and the ability to think creatively to find solutions.
  • Bachelor's degree in Computer Science or a related field

TransDyne IT Services

Machine Learning Engineer

About Company:

Our client organization's mission is to empower people to participate in global conversations through communities. They are responsible for the consumer-facing application on the Web, Android, and iOS platforms. In this role, you'll work with a specific team within this organization to drive related technical & product strategy, operations, architecture, and execution for one of the largest sites in the world.

Poster Experience focuses specifically on the user journey, which is the main source of user content for the product. The team aims to make it easier, faster, and smarter to create and participate in conversations, and drives several core product metrics for the entire ecosystem.

Job Description:

Job Title: LLM Machine Learning Engineer

Location: Pan India

Experience: 6+ yrs.

Employment Type: Contract to hire

Work Mode: Remote

Notice Period: Immediate joiners

Roles and Responsibilities:

  • Proven experience with JAX and NumPy in real-world ML projects
  • Strong understanding of TensorFlow, including internals and model structure
  • Experience with migrating models between frameworks and ensuring parity
  • Solid grasp of ML training workflows, loop mechanics, and evaluation metrics (a minimal training-loop sketch follows this list)
  • Strong Python programming skills and clean, modular code practices
  • Familiarity with ML experiment tracking and reproducibility tools (e.g., MLFlow, Weights & Biases)
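
For illustration, a minimal JAX training-loop sketch (the linear model and synthetic data are invented for the example; a real framework-migration or parity check would compare such a loop against its TensorFlow counterpart):

```python
import jax
import jax.numpy as jnp

def loss(params, x, y):
    w, b = params
    pred = x @ w + b
    return jnp.mean((pred - y) ** 2)           # mean squared error

@jax.jit
def step(params, x, y, lr=0.1):
    grads = jax.grad(loss)(params, x, y)       # gradients w.r.t. the parameter pytree
    return [p - lr * g for p, g in zip(params, grads)]

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (64, 3))
true_w = jnp.array([1.5, -2.0, 0.5])
y = x @ true_w + 0.3

params = [jnp.zeros(3), jnp.array(0.0)]
for _ in range(200):
    params = step(params, x, y)
print("learned weights:", params[0])           # should approach [1.5, -2.0, 0.5]
```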

People Prime Worldwide

Software Engineer - Backend

Our engineering team is hiring a backend software engineer to contribute to the development of our Warehouse Management System (WMS) and its companion Handy Terminal device, both of which are integral to our logistics product suite. These systems are designed to seamlessly integrate with our state-of-the-art ASRS systems. The team's mission is to build and maintain a robust, tested, and high-performance backend architecture, including databases and APIs, shared across all deployments. While the role emphasizes strong software development and engineering practices, we also value open communication and a collaborative team spirit.

This role is open for candidates who can be based either in Chennai (India) or Tokyo (Japan)

In this role, you will:

  • Design, develop, and maintain a key component that supports the efficient flow of supply chain operations.
  • Enhance code quality and ensure comprehensive test coverage through continuous improvement.
  • Collaborate effectively with cross-functional development teams to integrate solutions and align best practices.

Requirements

Minimum Qualifications:

  • 3-5 years of professional experience with Python, with a focus on versions 3.10 and above.
  • Practical experience working with web frameworks such as FastAPI or Django.
  • Strong understanding of SQL database principles, particularly with PostgreSQL.
  • Proficiency in testing and build automation tools, including pytest, GitHub Actions, and Docker (a small pytest sketch follows this list).
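
For illustration, a small pytest-style test using FastAPI's built-in TestClient against a made-up endpoint (not the actual WMS API):

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/bins/{bin_id}")
def get_bin(bin_id: str):
    # Placeholder handler: a real WMS service would look this up in PostgreSQL.
    return {"bin_id": bin_id, "status": "available"}

client = TestClient(app)

def test_get_bin_returns_payload():
    resp = client.get("/bins/A-01-03")
    assert resp.status_code == 200
    assert resp.json() == {"bin_id": "A-01-03", "status": "available"}
```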

Bonus Points:

  • Experience with NoSQL databases, particularly with Redis.
  • Practical experience with asynchronous programming (e.g., asyncio) or message bus systems.
  • Ability to clearly articulate technology choices and rationale (e.g., Tornado vs. Flask).
  • Experience presenting at conferences or meetups, regardless of scale.
  • Contributions to open-source projects.
  • Familiarity with WMS concepts and logistics-related processes.

Is This the Right Role for You?

  • You are motivated by the opportunity to make a tangible impact and deliver significant business value.
  • You appreciate APIs that are thoughtfully designed with clear, well-defined objectives.
  • You thrive on understanding how your work integrates and contributes to a larger, cohesive system.
  • You are proactive and self-directed, identifying potential issues and gaps before they become problems in production.

Benefits

  • Competitive salary package.
  • Opportunity to work with a highly talented and diverse team.
  • Comprehensive visa and relocation support.

Rapyuta Robotics

GIS Engineer

Let me tell you about the role

The Geospatial Technology Engineer helps to deliver our pioneering Geospatial Platform. As part of our Geospatial Technology Team, within Oil and Gas Technology, you'll work on meaningful projects across production, projects, subsurface, wells and crisis, gaining exposure to pioneering technologies and real-world applications. You will work closely with the various teams and stakeholders to provide specialist platform support and engineering.

The Enterprise Technology Engineer is a technical role for those who have a passion for data and a zeal to unlock and use Geospatial Technology to inform better business decisions.

What you will deliver

  • Silent delivery of the Geospatial Platform. Deliver support requests.
  • Assist the Geospatial business teams to develop and deploy new geospatial techniques and technologies.
  • Conduct maintenance and evergreening activities for our Geospatial Technology suite (e.g. upgrades, troubleshooting, patching, integrations, customisations).
  • Analyse, debug and rectify issues arising out of QA, customer and end user testing.

What you will need to be successful (experience and qualifications)

  • Hands-on experience on ESRI suite of products.
  • Experience - 5+ years in a similar role
  • Esri product troubleshooting skills.
  • Some experience in Esri system installation and upgrade.
  • Hands-on experience in ArcPy, ArcGIS API for JavaScript.
  • Analyze, debug and rectify issues arising out of QA, customer and end user testing.
  • Participate in daily scrums and cover other documentation needs, following an agile methodology.
  • Collaborate with global teams and businesses to validate various use cases and create innovative solutions.
  • Responsible for devising efficient algorithms to solve complex business problems.
  • Present information to clients and stakeholders in verbal or written format.
  • System Performance testing via Esri supported tools.
  • Basic knowledge of Esri system design.

Product Experience on:

  • Microsoft Azure Cloud (certification would be preferred)
  • ArcGIS Enterprise (ArcGIS Portal, ArcGIS Server and roles, Datastore(s))
  • ArcGIS Online
  • ArcSDE v11.0
  • ArcGIS Pro v3.x
  • Survey123
  • Esri Mobile Apps - Field Maps, Quick Capture etc.
  • FME Form & Flow v2023.x+ administration
  • Python, JavaScript, PowerShell scripting
  • ArcGIS API for Python
  • Microsoft SQL Server 2019+

Qualifications:

Master's degree in GIS, Geographic Science, Computer Science, Survey Engineering, or a related field; or a related Bachelor's degree with some relevant experience

About bp

bp is a global energy business with a purpose to reimagine energy for people and our planet. We aim to be a very different kind of energy company by 2030, helping the world reach net zero and improving people's lives. We are committed to creating a diverse and inclusive environment where everyone can thrive. Join bp and become part of the team building our future!

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

bp

Cloud Engineer

About Company :

They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. They guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

About Client:

Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations.

Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines.

Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering-reflecting its strategic commitment to driving innovation and value for clients across industries.

Job Title: AWS SME

Location: Mumbai

Experience: 8+ yrs

Job Type: Contract to hire (min 1+ yr)

Notice Period: Immediate joiners

Job Description:

  • AWS infrastructure management: understanding and troubleshooting AWS resources using DevOps practices.
  • Must have in-depth knowledge and hands-on experience with AWS services such as EC2, S3, EBS, EKS, ECR, RDS, and IAM.
  • Creating and modifying IAM roles and policies, restricting access to limited resources, and setting up permission boundaries with the help of Terraform scripts.
  • Hands-on experience with Terraform, building AWS infrastructure using Terraform scripts.
  • Good knowledge of Terraform variables, libraries, and modules.
  • Very good knowledge of Python programming.
  • Experience with Azure DevOps pipeline development and creation.
  • Hands-on experience with Git commands and branching strategy.
  • Hands-on experience with CI/CD tools (Azure DevOps).
  • Good understanding of EKS and ECR; able to create EKS clusters using Terraform scripts, perform cluster upgrades, etc.
  • Excellent communication and problem-solving skills are mandatory.

People Prime Worldwide

Backend Engineer

Experience: Two to five years building production Python backends

About Terrabase

Terrabase cuts hours of manual analysis to seconds. Our AI agents find the answer and trigger the next workflow automatically. Your work keeps that engine reliable, fast, and secure for enterprise customers.

Why this role matters

Every AI insight we deliver passes through a Python service you will own. Low-latency APIs, resilient task queues, and rock-solid observability make the difference between user delight and churn. You will design and scale that backbone.

What will you do

  • Build FastAPI services that expose search, retrieval, and orchestration endpoints.
  • Scale async jobs with Celery and RabbitMQ, using Redis for caching and rate control (see the sketch after this list).
  • Containerize with Docker and deploy repeatably on EC2 using Terraform or similar tooling.
  • Monitor with Prometheus, Grafana, and alerting hooks; chase p99 latency and error budgets.
  • Automate CI and CD so every merge ships safely and rolls back cleanly.
  • Harden security, secrets management, and SOC 2 logging.
  • Document clear run books and post-mortems so anyone can debug at two in the morning.
  • Collaborate with AI and front-end teams to hit sub-second query times and zero downtime releases.
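
For illustration, a minimal sketch of the FastAPI + Celery/RabbitMQ + Redis pattern described in this list (the endpoint, task body, and connection strings are placeholders, not Terrabase's actual services):

```python
from celery import Celery
from fastapi import FastAPI
import redis

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
worker = Celery("tasks",
                broker="amqp://guest@localhost//",        # RabbitMQ broker (placeholder)
                backend="redis://localhost:6379/1")       # results kept in Redis

@worker.task
def run_analysis(doc_id: str) -> str:
    # Placeholder for the long-running analysis an AI agent would trigger.
    return f"analysis complete for {doc_id}"

@app.post("/analyze/{doc_id}")
def analyze(doc_id: str):
    cached = cache.get(doc_id)                            # serve a cached answer if present
    if cached:
        return {"doc_id": doc_id, "status": cached}
    task = run_analysis.delay(doc_id)                     # enqueue via RabbitMQ, don't block the API
    return {"doc_id": doc_id, "task_id": task.id, "status": "queued"}
```

Keeping the API thin and pushing slow work onto the queue is what keeps p99 latency low while the worker pool absorbs bursts.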

What we look for

  • Two to five years of writing clean, idiomatic Python in production.
  • Strong work ethic and bias for ownership. You identify problems, propose fixes, and drive them to closure.
  • Clear, systematic thinker. Your design docs read like thinking in public, and your code structure reflects first principles reasoning.
  • Proficient Python engineer comfortable with type hints, pytest, and modern packaging.
  • Proven FastAPI or similar web framework expertise.
  • Hands-on experience with Celery, RabbitMQ, and Redis at meaningful scale.
  • Confident with Docker, Linux networking, and AWS EC2 fundamentals.
  • Solid grasp of monitoring, tracing, and log aggregation.
  • Clear communicator who values concise design docs and thoughtful code reviews.

Bonus points

  • Experience meeting SOC 2 or HIPAA audit requirements.
  • Prior work on high-throughput data pipelines or vector stores.
  • Contributions to open-source backend or DevOps projects.

Life at Terrabase

We are a sharp, humble, fully remote crew that values deep focus and fast feedback. Your code reaches real customers every week, backed by generous AWS budgets and a culture that favors clear thinking over long meetings.

Terrabase is an equal opportunity employer. We celebrate diversity and are committed to building an inclusive environment for every team member.

Terrabase

Senior Data Engineer

Responsibilities

  • Gather and assemble large, complex sets of data that meet non-functional and functional business requirements.
  • Skills: SQL, Python, R, Data Modeling, Data Warehousing, AWS (S3, Athena).
  • Create new data pipelines or enhance existing pipelines to accommodate non-standard data formats from customers.
  • Skills: ETL Tools (e.g., Apache NiFi, Talend), Python (Pandas, PySpark), AWS Glue, JSON, XML, YAML.
  • Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
  • Skills: Apache Airflow, Terraform, Kubernetes, AWS Lambda, CI/CD pipelines, Docker.
  • Build and maintain required infrastructure for optimal extraction, transformation, and loading (ETL) of data from various data sources using AWS and SQL technologies.
  • Skills: SQL, AWS Redshift, AWS RDS, EMR (Elastic MapReduce), Snowflake.
  • Use existing methods or develop new tools/methods to analyze the data and perform required data sanity validations to ensure completeness and accuracy as per technical and functional requirements.
  • Skills: Python (NumPy, Pandas), Data Validation Tools, Tableau, Power BI.
  • Work with stakeholders including Customer Onboarding, Delivery, Product, and other functional teams, assisting them with any data-related technical or infrastructure-related issues.
  • Skills: Stakeholder Communication, JIRA, Agile Methodologies.
  • Provide actionable insights into key data metrics (volumes, trends, outliers, etc.), highlight any challenges/improvements, and provide recommendations and solutions to relevant stakeholders.
  • Skills: Data Analysis, Data Visualization Tools (Tableau, Looker), Advanced Excel.
  • Coordinate with the Technical Program Manager (TPM) to prioritize discovered issues in the Data Sanity Report and own utility communications.
  • Skills: Project Management Tools, Reporting Tools, Clear Documentation Practices.

Expectations from the Role:

  • Be the primary owner of the data sanity process internally.
  • Skills: Data Quality Management, Python (Data Validation Libraries), SQL Auditing.
  • Run defined sanity checks/validations on utility data to ensure data accuracy and completeness (a small illustration follows this list).
  • Skills: Python, SQL, QA Tools, AWS QuickSight.
  • Create the data sanity report using the established process and template.
  • Skills: Report Automation Tools, Excel Macros, Python.
  • Provide actionable insights into key data metrics, highlighting any challenges/improvements in the data.
  • Skills: Business Intelligence Tools (Tableau, Power BI), Statistical Analysis Tools.
  • Assess issues in the data as per the technical and functional requirements of the implementation and flag critical issues to the TPM.
  • Skills: Root Cause Analysis, Data Profiling, Monitoring Tools.
  • Implement internal solutions to handle all issues in the data, such as creating custom transformers or pre-ingestion logic.
  • Skills: Python (ETL Scripts), PySpark, AWS Lambda, JSON Processing.
  • Maintain and use data pipelines and processes to ingest required data in Bidgely.
  • Skills: AWS Data Pipeline, Apache Kafka, SQL.
  • In case of custom utility data formats, create custom data pipelines in Bidgely to support the ingestion process.
  • Skills: Python, Apache Spark, AWS Glue.
  • On rare occasions, support utility data integration work by working with the utility's Datalake/IT team to understand their data structure, data sources, and formats, and assist in extracting data from their source systems.
  • Skills: Data Lake Architecture (Azure Data Lake, AWS Lake Formation), API Integration, Data Formats (Parquet, ORC).
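
For illustration, a small Python/pandas sketch of such sanity checks (the columns, values, and rules are assumptions for the example, not Bidgely's actual utility schema):

```python
import pandas as pd

# Stand-in for a utility data extract (column names are assumptions).
df = pd.DataFrame({
    "meter_id": ["M1", "M1", "M2", None],
    "read_ts":  ["2024-06-01", "2024-06-01", "2024-06-01", "2024-06-02"],
    "kwh":      [12.5, 12.5, -3.0, 8.1],
})

checks = {
    "no_null_meter_ids":  df["meter_id"].notna().all(),
    "no_duplicate_reads": not df.duplicated(subset=["meter_id", "read_ts"]).any(),
    "non_negative_usage": (df["kwh"] >= 0).all(),
    "timestamps_parse":   pd.to_datetime(df["read_ts"], errors="coerce").notna().all(),
}

report = pd.Series(checks, name="passed")
print(report)                      # this toy extract fails three of the four checks
if not report.all():
    print("Flag for the Data Sanity Report:", ", ".join(report[~report].index))
```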

Bidgely

Cloud Engineer

Job Description

1. Role - AWS Technical

2. Required Technical Skill Set - AWS and DevOps

3. Desired Experience Range - 4-6 years

4. Location of Requirement - Chennai

Desired Competencies (Technical/Behavioral Competency)

Must-Have /Responsibilities

  • Solid knowledge of and 5+ years of design and implementation experience in AWS cloud computing
  • Good Experience in Amazon Kubernetes Services
  • Good Experience in AWS Network Firewalls and WAF and AWS Security hub and AWS Shield
  • Good Experience in AWS Cognito services
  • Experience in infrastructure automation through DevSecOps in Cloud environments and familiarity with CI/CD tools such as AWS Code Commit, AWS CI/CD pipelines and Gitlab, Jfrog, Jenkins etc
  • Architecting, building, and maintaining cost-efficient, scalable AWS cloud environments for the customer Takenaka. Define Account Structure, Network Structure, Connectivity design, Logging-Monitoring solution design etc. at enterprise level.
  • Good understanding of AWS Identity and Access Management (IAM) and AWS Single Sign-On
  • Understanding of EC2, ECS, EKS, AWS Fargate, EFS, Lambda functions, etc.
  • Knowledge of SNS Topics, SQS, SES, CloudWatch, CloudWatch Events and event bridge.
  • Understanding of AWS networking concepts (for example, Amazon VPC, AWS Direct Connect, AWS VPN, transitive routing, AWS container services)
  • Hybrid DNS concepts (for example, Amazon Route 53 Resolver, on-premises DNS integration)
  • AWS backup and restoration for EC2, EBS, RDS etc. and Configuring disaster recovery solutions.
  • AWS cost and usage monitoring tools (for example, Cost Explorer, Trusted Advisor, AWS Pricing Calculator)
  • AWS networking services and DNS (for example, Direct Connect, AWS Site-to-Site VPN, Route 53)
  • Strong understanding of Operating systems (RH Linux and Windows) and troubleshooting methodologies.
  • Responsible for Patching Linux/Windows servers on AWS.
  • Architecting and Provisioning Windows and Linux environments in AWS.
  • Good understanding of Active Directory and its concepts.
  • Basic understanding of MS SQL and MYSQL administration.
  • Design Scalable Foundation across multiple AWS Accounts for Deploying Datalake, IOT, Analytics based solutions.
  • Design & Review Network, Security Architectures defined by different application Teams.
  • Define & periodically enhance the AWS Security Guideline for the Customer.
  • Identify Potential Areas of improvement & design the solutions according to the Customer.
  • Prepare Discussion material for Meetings with Customer in an easily explainable way.
  • Participate in Design review meetings with Enterprise Architects & AWS WAR review as and when required.
  • Work with the Team to resolve any Technical Challenges related to AWS Infrastructure Implementation.
  • Perform internal quality review checks to ensure AWS best practices and security guidelines have been met.
  • Prepare & Review SOP documents for task handover to OPS team post deployment.
  • Prepare various guidelines (platform usage, naming convention, security etc.) and prepare reusable materials for the team.
  • Communicating with internal teams, internal/external stakeholders and help building applications to meet project needs.
  • Participate in various Technical KT session for the Team as and when needed

Qualifications

Bachelor's degree in computer science, a related technical field, or equivalent practical experience.

Strong track record of implementing AWS services in a variety of distributed computing environments, solutioning complex AWS architecture

Certifications

Preferred Certifications:

• AWS Certifications (one or more preferred):

• AWS Certified Solutions Architect - Associate Level or Professional

• AWS Certified DevOps Engineer

Tata Consultancy Services

Remote Python AI Engineer - 17852

Work on Real-World Problems with Global Tech Experts

Join a leading U.S.-based technology company as a Python Developer / AI Engineer, where you'll tackle real-world challenges and build innovative solutions alongside top global experts. This is a fully remote, contract-based opportunity ideal for developers passionate about Python, data analysis, and AI-driven work.

Key Responsibilities:

  • Write efficient, production-grade Python code to solve complex problems.
  • Analyze public datasets and extract meaningful insights using Python and SQL (a short example follows this list).
  • Collaborate with researchers and global teams to iterate on data-driven ideas.
  • Document all code and development decisions in Jupyter Notebooks or similar platforms.
  • Maintain high-quality standards and contribute to technical excellence.
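
For illustration, a short Python example of the SQL-plus-pandas analysis style described above (the in-memory table and columns are invented placeholders):

```python
import sqlite3
import pandas as pd

# Stand-in for a public dataset, loaded into an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE weather (city TEXT, temp_c REAL);
    INSERT INTO weather VALUES ('Chennai', 34.1), ('Chennai', 35.2),
                               ('Bengaluru', 27.5), ('Bengaluru', 26.8);
""")

df = pd.read_sql_query(
    "SELECT city, AVG(temp_c) AS avg_temp_c FROM weather GROUP BY city", conn)
df["avg_temp_f"] = df["avg_temp_c"] * 9 / 5 + 32     # derive an extra column in pandas
print(df.sort_values("avg_temp_c", ascending=False))
```

In a real engagement the same query-then-summarise pattern would be documented in a Jupyter Notebook alongside the findings.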

Job Requirements:

  • Open to all levels: junior, mid-level, or senior engineers.
  • Degree in Computer Science, Engineering, or equivalent practical experience.
  • Proficient in Python programming for scripting, automation, or backend development.
  • Experience with SQL/NoSQL databases is a plus.
  • Familiarity with cloud platforms (AWS, GCP, Azure) is advantageous.
  • Must be able to work 5+ hours overlapping with Pacific Time (PST/PT).
  • Strong communication and collaboration skills in a remote environment.

Perks & Benefits:

  • Work on cutting-edge AI and data projects impacting real-world use cases.
  • Collaborate with top minds from Meta, Stanford, and Google.
  • 100% remote - work from anywhere.
  • Contract role with flexibility and no traditional job constraints.
  • Competitive compensation in USD, aligned with global tech standards.

Selection Process:

  • Shortlisted developers may be asked to complete an assessment.
  • If you clear the assessment, you will be contacted for contract assignments with expected start dates, durations, and end dates.
  • Some contract assignments require fixed weekly hours, averaging 20/30/40 hours per week for the duration of the contract assignment.

Turing

Full Stack Gen AI Engineer

Job Title: Full Stack Gen AI Engineer

Location: Gurgaon

Experience Required: 2 years in full stack development with proven experience in designing, building, and deploying Generative AI solutions, including agentic systems.

Job Type: Full-time

About Nebula9.ai: Nebula9.ai is a leading company in Applied Generative AI solutions, focusing on transforming various industries with innovative AI-driven strategies. Our mission is to empower businesses through cutting-edge AI solutions, including sophisticated Agentic AI systems and interoperability protocols, driving growth and operational efficiency in an ever-evolving digital landscape. We value innovation, integrity, collaboration, and excellence in everything we do.

Job Description:

Nebula9.ai is seeking a highly skilled and visionary Full Stack Generative AI Engineer to join our pioneering team in Gurgaon. The ideal candidate will possess a robust background in end-to-end full stack development, coupled with deep, hands-on experience in leveraging Generative AI technologies to build impactful solutions, with a strong emphasis on AI Agents and Multi-Agent frameworks. You will be instrumental in architecting, developing, and deploying sophisticated AI applications, from conceptualization through to production, including systems that leverage emerging standards like Agent-to-Agent (A2A) and Model Context Protocol (MCP). This role requires strong proficiency across the full development stack (front-end, back-end, databases, and cloud infrastructure), advanced expertise in developing and integrating Generative AI models, and a solutioning DNA to tackle complex challenges.

Responsibilities:

  • Architect, develop, test, and deploy robust and scalable applications utilizing diverse Generative AI models and techniques, with a significant focus on RAG applications, Conversational AI, AI Agents, and Multi-Agent frameworks.
  • Lead the full-stack development efforts for Gen AI projects, ensuring seamless integration of AI capabilities into user-facing applications and backend systems, primarily using the MERN/MEAN stack.
  • Design and implement systems leveraging inter-agent communication protocols (e.g., A2A) and tool/capability integration protocols (e.g., MCP) to build modular and interoperable agentic solutions.
  • Collaborate closely with AI architects, data scientists, product managers, and clients to define requirements, design innovative solutions, and deliver high-quality AI-powered products.
  • Design and implement efficient data pipelines, MLOps workflows, and infrastructure for training, fine-tuning, evaluating, and serving Generative AI models and AI Agents.
  • Leverage tools and frameworks such as Langchain, LangSmith, LangServe, LlamaIndex, Flowise/LangFlow, and AutoGen to construct and manage complex AI applications and agentic systems.
  • Work extensively with vector databases, embedding models, and advanced prompt engineering techniques to optimize information retrieval and agent performance (a minimal retrieval sketch follows this list).
  • Implement and manage API development and integrations for internal services, third-party AI platforms (e.g., OpenAI, Google Gemini, Azure AI), and agent communication endpoints.
  • Optionally, fine-tune open-source or proprietary LLMs to meet specific client needs and improve domain-specific performance for AI Agents.
  • Drive the adoption of best practices in software engineering, AI development, and cloud architecture (AWS, Azure, GCP), including containerization (Docker, Kubernetes).
  • Actively research and stay ahead of the curve with the latest advancements in Generative AI, Agentic AI, AI interoperability protocols (like A2A, MCP), Machine Learning, and full stack technologies.
  • Champion and leverage GenAI tools and techniques to achieve 10X productivity improvements for yourself, the team, and in the solutions delivered to clients.
  • Mentor junior developers, participate actively in code reviews, contribute to technical documentation, and foster a culture of technical excellence and innovative solutioning.
  • Troubleshoot, debug, and optimize AI applications and agentic systems for performance, scalability, and reliability.
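
For illustration, a framework-free Python sketch of the retrieval step in a RAG pipeline (the embed() stand-in and sample documents are invented; a production system would use a real embedding model and one of the vector databases and frameworks named above):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Assumption: in practice this would call an embedding model; here we
    # fabricate a pseudo-random unit vector so the example runs standalone.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

documents = ["Refund policy: refunds within 30 days.",
             "Shipping: orders ship in 2-3 business days.",
             "Support hours: 9am-6pm IST, Monday to Friday."]
index = np.stack([embed(d) for d in documents])       # stand-in for a vector database

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)                      # cosine similarity (unit vectors)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

context = "\n".join(retrieve("When will my order arrive?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: When will my order arrive?"
print(prompt)                                          # passed to the LLM or agent in a real system
```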

Required Skills & Experience:

  • Professional experience in full stack development, with a significant portion dedicated to building and deploying production-grade applications.
  • Strong proficiency in core programming languages: Python (essential for AI/ML), JavaScript, and TypeScript (essential for full-stack, especially MERN/MEAN).
  • Proven expertise in the MERN stack or MEAN stack.
  • Deep understanding and hands-on experience with Generative AI concepts, models (LLMs, diffusion models), and architectures, with a specific focus on building AI Agents and Agentic Systems.
  • Demonstrable expertise in advanced prompt engineering techniques and strategies for various LLMs and agent interactions.
  • Practical experience with Gen AI frameworks such as Langchain (required), LlamaIndex, and workflow/agent builders like Flowise/LangFlow or AutoGen.
  • Experience working with and managing vector databases and embedding models for RAG systems and agent knowledge.
  • Familiarity or experience with concepts and potential implementation of agent communication/interoperability protocols like A2A and MCP is highly desirable.
  • Experience with fine-tuning LLMs (preferred, but strong understanding is valuable).
  • Proficiency in API design, development (RESTful, GraphQL), and integration, including experience with platforms like OpenAI (GPT-4), Google Gemini, and Microsoft Azure AI services.
  • Solid understanding and experience with cloud platforms (AWS, Azure, or GCP), including services relevant to AI/ML workloads and application hosting.
  • Experience with containerization technologies (Docker, Kubernetes) and CI/CD pipelines.
  • Proficiency with version control systems (Git) and agile/collaborative development methodologies.
  • Demonstrated solutioning DNA : ability to understand complex problems, design innovative and practical solutions, and see them through to implementation.
  • Proven ability to leverage GenAI for enhancing personal and team productivity.
  • Excellent analytical, problem-solving, and critical thinking skills.
  • Strong communication, collaboration, and leadership potential.

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, Artificial Intelligence, or a closely related field.
  • A strong portfolio of projects demonstrating expertise in full-stack development and the successful application of Generative AI technologies, ideally including examples of AI agents, RAG systems, or complex AI-driven workflows.

How to Apply:

Interested candidates are invited to send their resume and a compelling cover letter to with the subject line "Application for Full Stack Generative AI Engineer (Agentic Systems)- JOB00134". Please include links to your GitHub profile, and/or a portfolio showcasing relevant projects and contributions.

Nebula9.ai


Engineer - QA T8

Talent500 is hiring for one of our client

About American Airlines:

Our purpose is To Care for People on Life's Journey. We have a relentless drive for innovation and excellence. Whether you're engaging with customers at the airport or advancing our IT infrastructure, every team member plays a vital role in shaping the future of travel.

At American's Tech Hubs, we tackle complex challenges and pioneer cutting-edge technologies that redefine the travel experience. Our vast network and diverse customer base offer unique opportunities for engineers to solve real-world problems on a grand scale.

Join us and immerse yourself in a dynamic, tech-driven environment where your creativity and unique strengths are celebrated. Experience the excitement of being at the forefront of technological innovation, where every day brings new opportunities to make a meaningful impact.

About Tech Hub in India:

American's Tech Hub in Hyderabad, India, is our newest location and home to team members who drive technical innovation and engineer unrivalled digital products to best serve American's customers and team members. With U.S. tech hubs in Dallas-Fort Worth, Texas and Phoenix, Arizona, our new location in Hyderabad, India, positions American to deliver industry-leading technology solutions that create a world-class customer experience.

Why you will love this job:

  • As one diverse, high-performing team dedicated to technical excellence, you will focus relentlessly on delivering unrivaled digital products that drive a more reliable and profitable airline.
  • The Software domain refers to the area within Information Technology that focuses on the development, deployment, management, and maintenance of software applications that support business processes and user needs. This includes development, application lifecycle management, requirement analysis, QA, security & compliance, and maintaining the applications and infrastructure.

What you will do:

As noted above, this list is intended to reflect the current job but there may be additional functions that are not referenced. Management will modify the job or require other tasks be performed whenever it is deemed appropriate to do so, observing, of course, any legal obligations including any collective bargaining obligations.

  • Writes, tests, and documents technical work products (e.g., code, scripts, processes) according to organizational standards and practices
  • Devotes time to raising the quality and craftsmanship of products and systems
  • Conducts root cause analysis to identify domain level problems and prescribes action items to mitigate
  • Designs self-contained systems within a team's domain, and leads implementations of significant capabilities in existing systems
  • Coaches team members in the execution of techniques to improve reliability, resiliency, security, and performance
  • Decomposes intricate and interconnected designs into implementations that can be effectively built and maintained by less experienced engineers
  • Anticipates trouble areas in systems under development and guides the team in instrumentation practices to ensure observability and supportability
  • Defines test suites and instrumentation that ensures targets for latency and availability are being consistently met in production
  • Leads through example by prioritizing the closure of open vulnerabilities
  • Evaluates potential attack surfaces in systems under development, identifies best practices to mitigate, and guides teams in their implementation
  • Leads team in the identification of small batches of work to deliver the highest value quickly
  • Ensures reuse is a first-class consideration in all team implementations and is a passionate advocate for broad reusability
  • Formally mentors teammates and helps guide them to and along needed learning journeys
  • Observes their environment and identifies opportunities for introducing new approaches to problems

All you will need for success:

Minimum Qualifications - Education & Prior Job Experience:

  • Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS / MIS), Engineering or related technical discipline, or equivalent experience / training
  • 3+ years of experience designing, developing, and implementing large-scale solutions in production environments

Preferred Qualifications - Education & Prior Job Experience:

  • Master's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS / MIS), Engineering or related technical discipline, or equivalent experience / training
  • Airline Industry experience

Mandatory Skills:

  • Java / Python
  • Selenium / TestNG / Postman
  • LoadRunner (load testing / performance monitoring)

Skills, Licenses & Certifications:

Proficiency with the following technologies:

  • Programming Languages: Java, Python, C#, JavaScript / TypeScript
  • Frameworks: Spring / Spring Boot, FastAPI
  • Front End Technologies: Angular / React
  • Deployment Technologies: Kubernetes, Docker
  • Source Control: GitHub, Azure DevOps
  • CI/CD: GitHub Actions, Azure DevOps
  • Data management: PostgreSQL, MongoDB, Redis
  • Integration / APIs Technologies: Kafka, REST, GraphQL
  • Cloud Providers such as Azure and AWS
  • Test Automation: Selenium, TestNG, Postman, SonarQube, Cypress, JUnit / NUnit / PyTest, Cucumber, Playwright, Wiremock / Mockito / Moq
  • Ability to optimize solutions for performance, resiliency and reliability while maintaining an eye toward simplicity
  • Ability to concisely convey ideas verbally, in writing, in code, and in diagrams
  • Proficiency in object-oriented design techniques and principles
  • Proficiency in Agile methodologies, such as SCRUM
  • Proficiency in DevOps toolchain methodologies, including continuous integration and continuous deployment

Language, Communication Skills, & Physical Abilities:

  • Ability to communicate effectively, both verbally and in writing, with all levels within the organization
  • Physical ability necessary to safely and successfully perform the essential functions of the position, with or without any legally required reasonable accommodations that do not pose an undue hardship.

Note: If the Company has reason to question an employee's physical ability to safely and/or successfully perform the position's essential job functions, the HR team generally will engage in an interactive process to determine whether a reasonable accommodation is appropriate. HR (working with the operation) ordinarily first speaks with the team member directly and they mutually identify the physical demands of the job that are or may be impacted by the employee's obvious or known condition. Then, if necessary, HR would request medical documentation from the team member's treating physician or others to confirm the employee's ability to perform those essential job functions safely and successfully.

Talent500

Today

Automation Engineer - UiPath

Role: Automation Engineer / RPA Developer (UiPath)

Location: Mumbai

Experience: 3+ Years

Mode: Full-Time

We are looking for an experienced Automation Engineer to join our automation development team. The ideal candidate will have a strong background in designing and building end-to-end automation solutions using UiPath and Microsoft Power Platform tools. The role involves working closely with business stakeholders to assess, design, develop, and deliver scalable automation solutions that enhance operational efficiency and business value.

Responsibilities:

  • Perform independent feasibility assessments to determine automation potential.
  • Collaborate with business stakeholders to analyse existing processes and identify automation opportunities.
  • Conduct requirement-gathering sessions through workshops, interviews, and walkthroughs.
  • Create clear documentation such as Process Design Documents (PDD), user stories, and process maps.
  • Design automation solutions aligned with business goals and prepare detailed Solution Design Documents.
  • Develop and maintain automation workflows using UiPath and Power Platform.
  • Ensure code quality by following development standards and conducting peer reviews.
  • Create and execute test plans, including unit testing, integration testing, and UAT.
  • Participate in continuous improvement efforts by identifying process optimization opportunities.
  • Coordinate with IT and infrastructure teams to manage environment setup and deployment activities.
  • Ensure timely delivery of assigned tasks and compliance with organizational standards and policies.
  • Explore and propose the integration of cognitive elements like OCR, AI/ML, and image recognition into automation solutions.

Required Skills:

  • 3+ years of hands-on experience in UiPath RPA development.
  • Strong experience with Microsoft Power Platform (Power Automate, Power Apps).
  • UiPath certification (Advanced RPA Developer or Business Analyst) preferred.
  • Proven experience in end-to-end automation delivery, including requirement analysis, design, development, and testing.
  • Strong understanding of SDLC and Agile methodologies (Scrum/Kanban).
  • Excellent communication, analytical thinking, and stakeholder management skills.
  • Proficiency in MS Office tools.

Desired Skills:

  • Experience with Python scripting, Azure AI, Azure Apps, or VBA.
  • Exposure to cognitive technologies like OCR, image recognition, and AI/ML integration.
  • Familiarity with project management and collaboration tools like JIRA, Confluence, ServiceNow, and Azure DevOps.
  • Prior experience working with IT operations or support teams is a plus.

Atyeti Inc

Today

Software Engineer - Power BI & SharePoint

Company Description:

FUTURRIZON TECHNOLOGIES PVT. LTD. is a tech company focused on helping organizations automate their businesses in a cost-effective way. We specialize in Microsoft 365 Suite, including Power Apps, Power Automate, Power BI, SharePoint, Teams, and Office Apps. We also specialize in Data Engineering and Data Science Technologies.

Role Description:

We are looking for a skilled SharePoint & Power Platform Developer with hands-on experience in PowerApps, Power Automate, SharePoint Online, and SPFx. The role involves designing and developing custom SharePoint solutions and automation workflows to support business processes and collaboration.

Key Responsibilities:

  • Design and develop custom solutions using PowerApps and Power Automate
  • Work extensively on SharePoint Online, building and maintaining custom features and web parts using SPFx
  • Utilize JavaScript, jQuery, React with TypeScript, and SharePoint JSOM/CSOM and REST APIs to enhance the user experience
  • Use development tools like Visual Studio Code, Gulp, and Yeoman for efficient SPFx development
  • Follow Microsoft best practices and guidelines for SPFx development
  • Collaborate with business users and stakeholders to understand requirements and provide scalable SharePoint and Power Platform solutions
  • Use Power BI to create dashboards and data visualizations as needed
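
As a rough illustration of the SharePoint REST API work mentioned above, here is a small Python sketch that reads items from a SharePoint Online list. The site URL, list title, and bearer token are placeholders (real authentication goes through your tenant's app registration), and actual SPFx web parts would of course be written in TypeScript; treat this purely as an API illustration.

```python
# Hypothetical SharePoint Online REST call; URL, list title, and token are placeholders.
import requests

SITE_URL = "https://contoso.sharepoint.com/sites/team"  # placeholder site
ACCESS_TOKEN = "<bearer-token>"                          # placeholder token

response = requests.get(
    f"{SITE_URL}/_api/web/lists/getbytitle('Documents')/items",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/json;odata=verbose",
    },
    timeout=10,
)
response.raise_for_status()

# The verbose OData format nests results under "d" -> "results".
for item in response.json()["d"]["results"]:
    print(item.get("Title"))
```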

Qualifications:

  • Bachelor's degree in computer science, IT, or related field
  • 1+ years of experience with SharePoint Online and SPFx
  • Proficient in JavaScript, jQuery, React (with TypeScript)
  • Strong knowledge of PowerApps, Power Automate, SharePoint Designer
  • Experience with JSOM/CSOM and SharePoint REST APIs
  • Familiar with tools like Visual Studio Code, Gulp, and Yeoman
  • Understanding of Microsoft best practices for SPFx
  • Basic Power BI knowledge

FUTURRIZON TECHNOLOGIES PVT. LTD.

Today

Software Engineer II

Position Overview

We are a seed-funded startup focused on using state-of-the-art AI technologies to revolutionize the credit industry. Our team consists of machine learning experts and software engineers who have worked at top-tier US tech companies like Apple, Amazon, etc., and we are passionate about using AI to improve access to credit information and due diligence for businesses. We have the product on the market, our first clients, and sufficient runway. We are looking for a skilled Software Engineer with a strong foundation in Python and experience in backend development and data processing. The ideal candidate will have at least two years of professional experience in software development, with a significant portion dedicated to Python programming.

Key Responsibilities

  • Backend Development: Build, maintain, and optimize backend systems using Python. Ensure the reliability, efficiency, and scalability of our data services.
  • Data Processing: Develop and maintain processes for handling large datasets, including data collection, storage, transformation, and analysis.
  • API Development: Design and implement APIs that enable seamless data retrieval and manipulation for frontend applications.
  • Collaboration and Integration: Work with cross-functional teams to integrate backend services with other software systems and solutions within the company.
  • Performance Tuning: Monitor system performance, troubleshoot issues, and implement optimizations to improve system efficiency.
  • Code Quality: Maintain high code standards, write clean, well-documented, and testable code, and contribute to code reviews.
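
As a minimal sketch of the API-development responsibility above, the snippet below exposes a single data-retrieval endpoint in Python. FastAPI, the route name, and the in-memory records are assumptions for illustration, not details taken from this posting.

```python
# Minimal data-retrieval API sketch; framework and data are illustrative.
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Stand-in for a real datastore.
RECORDS = {
    1: {"company": "Acme Ltd", "score": 0.82},
    2: {"company": "Globex", "score": 0.41},
}


@app.get("/records/{record_id}")
def get_record(record_id: int) -> dict:
    """Return a single record, or a 404 if it does not exist."""
    record = RECORDS.get(record_id)
    if record is None:
        raise HTTPException(status_code=404, detail="record not found")
    return record
```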

Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • At least 2 years of professional software development experience with a focus on Python.
  • Proven track record of backend development and creating efficient data processing pipelines.
  • Experience with RESTful API design and development.
  • Familiarity with SQL and NoSQL databases, as well as data modeling techniques.
  • Knowledge of software best practices, like Test-Driven Development (TDD) and Continuous Integration (CI).
  • Strong analytical skills and the ability to work in a team environment.
  • Excellent problem-solving abilities and attention to detail.

What We Offer

  • An opportunity to work on challenging problems at the intersection of data and backend development.
  • A dynamic environment that fosters professional growth and learning.
  • Competitive salary and benefits package.

CredHive

Today

Senior Data Engineer

Project Description:

  • Are you passionate about leveraging the latest technologies for strategic change? Do you enjoy problem solving in clever ways? Are you organized enough to drive change across complex data systems? If so, you could be the right person for this role.
  • As an experienced data engineer, you will join a global data analytics team in our Group Chief Technology Officer / Enterprise Architecture organization, supporting strategic initiatives that range from portfolio health to integration.

Responsibilities:

  • Help the Group Enterprise Architecture team develop our suite of EA tools and workbenches
  • Work in the development team to support the development of portfolio health insights
  • Build data applications from the cloud infrastructure to the visualization layer
  • Produce clear and commented code
  • Produce clear and comprehensive documentation
  • Play an active role with technology support teams and ensure deliverables are completed or escalated on time
  • Provide support on any related presentations, communications, and trainings
  • Be a team player, working across the organization with the skills to indirectly manage and influence
  • Be a self-starter willing to inform and educate others

Mandatory Skills Description:

  • B.Sc./M.Sc. degree in computing or similar
  • 5-8+ years' experience as a Data Engineer, ideally in a large corporate environment
  • In-depth knowledge of SQL and data modelling/data processing
  • Strong experience working with Microsoft Azure
  • Experience with visualisation tools like Power BI (or Tableau, QlikView, or similar)
  • Experience working with Git, JIRA, GitLab
  • Strong flair for data analytics
  • Strong flair for IT architecture and IT architecture metrics
  • Excellent stakeholder interaction and communication skills
  • Understanding of performance implications when making design decisions, to deliver performant and maintainable software
  • Excellent end-to-end SDLC process understanding
  • Proven track record of delivering complex data apps on tight timelines
  • Passionate about development with a focus on data and cloud
  • Analytical and logical, with strong problem-solving skills
  • A team player, comfortable with taking the lead on complex tasks
  • An excellent communicator who is adept at handling ambiguity and communicating with both technical and non-technical audiences
  • Comfortable working in cross-functional global teams to effect change
  • Passionate about learning and developing your hard and soft professional skills

Nice-to-Have Skills Description:

  • Experience working in the financial industry
  • Experience in complex metrics design and reporting
  • Experience in using artificial intelligence for data analytics

Languages:

  • English: C1 Advanced

Luxoft India

Today

Full Stack Engineer

Company Description

We are transforming recruitment with AI-driven video interviewing and skills assessments. Our platform helps businesses streamline the hiring process, making it faster, smarter, and more efficient. With customizable tests, remote video interviews, and data-driven insights, we help you find the best talent quickly and confidently. Save time, reduce bias, and make better hiring decisions with us.

Role Description

This is a contract remote role for a Full Stack Engineer. The Full Stack Engineer will be responsible for developing and maintaining both front-end and back-end web applications. Day-to-day tasks will include coding, debugging, and collaborating with the design and product teams to ensure seamless functionality and user experience.

Qualifications

  • 1-2 years of experience in full-stack web development
  • Strong front-end skills, especially in Next.js and CSS
  • Solid experience with back-end technologies, frameworks, and databases
  • Familiarity with modern web development tools and best practices
  • Strong problem-solving abilities and attention to detail
  • Ability to work independently and manage your own time
  • Degree in Computer Science, Engineering, or a related field (preferred)

Additional Information

  • Contract-based role with the potential to convert to a full-time position
  • Immediate start
  • Opportunity to work on a high-impact AI-based product in the recruitment space

Skill Spotters

Today

Network Engineer

Lunaseed

Lunaseed is a next-gen AI-powered fintech platform that connects vetted startups with investors, transforming fundraising from months to minutes.

We believe raising capital should be fast, simple, and trusted, not complicated. Our mission is to revolutionize startup investing with automation, validation, and real-time deal flow.

Network Engineer (2 Positions)

Location: San Francisco, CA

Type: Remote, Full-time

Working Days: Monday - Saturday (Indian Time zone)

Salary: INR 2,00,000 per month

Work Start Date: 1st July, 2025

Role Summary:

We've built the foundation. Now we need Network Engineers to secure, optimize and finalize our AWS-based infrastructure in preparation for public launch across web and mobile platforms.

Roles & Responsibilities:

  • Audit and harden existing AWS networking setup (VPCs, security groups, routing, DNS, IAM policies)
  • Collaborate with backend engineers to prepare for user load, data security, and scaling thresholds
  • Implement traffic monitoring and automated alerting tools (CloudWatch, GuardDuty, custom dashboards)
  • Conduct pen-testing simulations and close any remaining security gaps before go-live
  • Finalize DNS, SSL, and domain routing for production launch on both web and mobile
  • Coordinate with the team to implement global CDNs and optimize latency for real-time AI workflows
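
As one concrete example of the audit-and-harden work listed above, the sketch below flags security group rules that are open to the internet. It assumes boto3 with credentials already configured; the region and the "open to 0.0.0.0/0" heuristic are illustrative choices, not requirements from the posting.

```python
# Hedged sketch of one small piece of an AWS network audit.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group["IpPermissions"]:
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(
                    f"{group['GroupId']} ({group['GroupName']}) allows ports "
                    f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')} from anywhere"
                )
```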

At Lunaseed, we're not just building a product, we're revolutionizing startup fundraising to make it faster, smarter, and more trusted. If you're passionate about high-impact work, thrive in fast-paced environments, and want to shape the future of fintech with cutting-edge AI and global reach, we'd love to have you on our team.

Lunaseed

Today

Software Engineer

True North Marine Inc., a division of Accelleron Industries, is a consultancy dedicated to assisting vessel operators in ensuring that their voyages are undertaken in the safest and most cost-effective manner. We offer navigational assistance to a diverse client base globally in the maritime industry. We are the only company in Canada that offers weather routing/performance monitoring for ocean vessels. Our offices are located in Montreal and India.

Job description:

We are looking for an enthusiastic Application Developer to join our new development team. We implement advanced weather routing and ship performance analysis software. The ideal candidate is a professional programmer who can work on software implementation in an Agile environment under the supervision of the Software Development Manager. We are looking for someone who wants to learn the business domain, be a team player, and contribute to the development of this ambitious project.

Your responsibilities:

  • Assist users on our in-house software.
  • Make corrections to the existing code according to the problems found by the users
  • Implement quality assurance activities (ex: unit tests, integration tests, etc.)
  • Participate in our Daily Scrum meeting
  • Perform other related duties

Your background:

  • About 2-5 years of experience in application development (web oriented is an asset)
  • Have worked on applications in the cloud
  • Autonomous, attentive to details and be an active collaborator
  • Have excellent communication skills in French and/or in English
  • Strong knowledge of C# and ASP.NET
  • Strong knowledge of SQL Server
  • Strong knowledge of JavaScript
  • Experience in developing unit tests is an asset

Your benefits:

  • Competitive Compensation Package
  • Hybrid work model available upon completion of the initial training period
  • Inclusive and Collaborative Work Culture fostering diversity and innovation
  • Reimbursement of Training and Certification Expenses to support professional growth
  • Additional Perks and Benefits to enhance work-life balance!

True North Marine an Accelleron Company

Today

DevOps Engineer

Who are we:

Turbostart is not just a startup fund and accelerator, we are a catalyst for builders and a powerhouse of innovation. Our mission is to propel early-stage startups into the future by providing unparalleled support in technology, marketing, strategy, and beyond. We're in the business of building tomorrow's leaders - today. After 5 Years and 5 Funds we have supported over 50 startups, spanning sectors, stages and geographies - and this is just the beginning!

Turbostart spans India, the Middle East, the US as well as Singapore - giving you the opportunity to gain exposure and see the impact of your work ripple across regions. Turbostart has also launched 5 Centers of Excellence across Tech, Marketing, Sales, UI/UX and Investment Banking to support the growth of our startup network.

Know more about us on

Turbostart Technology Development Centre (T2C) is a leading Center of Excellence within the Turbostart ecosystem. Turbostart is a prominent startup accelerator and investment firm dedicated to fostering innovation and supporting the growth of promising startups. T2C plays a pivotal role in this mission by providing cutting-edge technology solutions and expertise to all portfolio companies, as well as taking on individual technology projects and product development.

Know more about us on

What we are looking for:

Role: DevOps Engineer

We are seeking a highly skilled DevOps Engineer with experience in implementing and optimizing DevOps practices within software development. As a DevOps Engineer, you will play a pivotal role in analyzing user requirements and business objectives, designing and implementing CI/CD pipelines, automating deployment processes, and ensuring efficient integration of development and operations teams. The role requires a deep understanding of technical documentation and user assistance material, along with excellent written and oral communication skills.

Experience Required: 2-3 years hands-on experience in DevOps practices, including CI/CD implementation, infrastructure automation, and cloud platform utilization.

Key Responsibilities:

  • Design and implement CI/CD pipelines on Azure and AWS platforms using tools such as Azure DevOps and AWS CodePipeline.
  • Manage and configure Kubernetes clusters, specifically AKS (Azure Kubernetes Service) and EKS (Amazon Elastic Kubernetes Service).
  • Set up networking solutions within cloud environments, including VPC (Virtual Private Cloud) and VNet (Virtual Network).
  • Implement monitoring strategies using tools like Grafana, Prometheus, and Loki to ensure optimal system performance.
  • Maintain source control using GitHub and facilitate collaboration among team members.
  • Build and deploy containerized applications using Docker for efficient deployment and scaling.
  • Troubleshoot and administer Linux systems to ensure stability and security.
  • Design, develop, and maintain Helm charts to streamline Kubernetes application deployments.
  • Create Terraform scripts to automate infrastructure setup and provisioning based on project requirements.
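
As a small illustration of the cluster-management side of this role, here is a minimal sketch, assuming the official kubernetes Python client and a local kubeconfig, that lists pod health in a namespace. The namespace and the restart-count heuristic are placeholders; real AKS/EKS administration would combine this with the Helm, Terraform, and monitoring tooling named above.

```python
# Quick cluster health check; assumes a kubeconfig for the target cluster.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config for the current context
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="default").items:  # placeholder namespace
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    print(f"{pod.metadata.name}: phase={pod.status.phase}, restarts={restarts}")
```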

Skills and Qualifications:

  • Bachelor's degree in Computer Science, Engineering, or related field (or equivalent practical experience).
  • Strong understanding of CI/CD concepts and experience with tools such as Azure DevOps, AWS CodePipeline, Jenkins, etc.
  • Familiarity with Kubernetes orchestration and management, particularly AKS and EKS.
  • Proficiency in cloud networking concepts and configurations (VPC, VNet, subnets, security groups, etc.).
  • Hands-on experience with monitoring and logging tools like Grafana.
  • Solid understanding of Docker and container orchestration.
  • Experience in Linux system administration and shell scripting.
  • Knowledge of version control systems (e.g., Git, GitHub) and their workflows.
  • Familiarity with infrastructure as code (IaC) tools like Terraform for automating cloud infrastructure deployments.
  • Strong problem-solving skills and ability to work collaboratively in a team environment.
  • Excellent communication skills and ability to document processes and procedures effectively.

Turbostart

Today

Senior Staff Machine Learning Engineer

About Netradyne

Founded in 2015, Netradyne is a technology company that leverages expertise in Artificial Intelligence, Deep Learning, and Edge Computing to bring transformational solutions to the transportation industry. Netradyne's technology is already deployed in thousands of vehicles; and our customers drive everything from passenger cars to semi-trailers on interstates, suburban roads, rural highways-even off-road.

Netradyne is looking for talented engineers to join our Analytics team, made up of graduates from IITs, IISc, Stanford, UIUC, UCSD, etc. We build cutting-edge AI solutions that help drivers and fleets recognize unsafe driving scenarios in real time, preventing accidents and reducing fatalities/injuries.

Role and Responsibilities

You will be embedded within a team of machine learning engineers and data scientists responsible for building and productizing generative AI and deep learning solutions. You will:

  • Design, develop, and evaluate generative AI models for vision and data science tasks.
  • Collaborate with cross-functional teams to integrate AI-driven solutions into business operations.
  • Build and enhance frameworks for automation, data processing, and model deployment.
  • Develop and deploy AI agents, including Retrieval-Augmented Generation (RAG) systems.
  • Utilize Gen-AI tools and workflows to improve the efficiency and effectiveness of AI solutions.
  • Conduct research and stay updated with the latest advancements in generative AI and related technologies.
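
To ground the RAG-related responsibility above, here is a minimal retrieval sketch assuming sentence-transformers. The model name, the toy documents, and the query are purely illustrative; a production system would add a vector store and an LLM that generates the final answer from the retrieved context.

```python
# Toy retrieval step of a RAG pipeline; model and documents are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed lightweight encoder

documents = [
    "Harsh braking events are flagged when deceleration exceeds a threshold.",
    "Driver coaching reports are generated weekly per fleet.",
    "Edge devices upload event clips over LTE when connectivity allows.",
]
doc_embeddings = model.encode(documents, normalize_embeddings=True)

query = "How are harsh braking events detected?"
query_embedding = model.encode([query], normalize_embeddings=True)[0]

# Cosine similarity reduces to a dot product on normalized vectors.
scores = doc_embeddings @ query_embedding
best = int(np.argmax(scores))
print(f"Top context ({scores[best]:.2f}): {documents[best]}")
```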

Requirements:

  • B.Tech, M.Tech, or PhD in computer science, electrical engineering, statistics, or math.
  • At least 8 years of working experience in data science, computer vision, or related domain.
  • Proven experience with building and deploying generative AI solutions.
  • Strong programming skills in Python and solid fundamentals in computer science, particularly in algorithms, data structures, and OOP.
  • Experience with Gen-AI tools and workflows.
  • Proficiency in both vision-related AI and data analysis using generative AI.
  • Experience with cloud platforms and deploying models at scale.
  • Experience with transformer architectures and large language models (LLMs).
  • Familiarity with frameworks such as TensorFlow, PyTorch, and Hugging Face.
  • Proven leadership and team management skills.

Desired Skills:

  • Working experience with AWS is a plus.
  • Knowledge of best practices in software development, including version control, testing, and continuous integration.
  • Working knowledge of common industry frameworks and tools around building LLMs, such as OpenAI, GPT, BERT, etc.
  • Experience with MLOps tools and practices for continuous deployment and monitoring of AI models.

Netradyne

Today

Azure Cloud Engineer

Job Summary:

We are seeking a highly skilled and proactive Azure Cloud Engineer to join our platform engineering team. The ideal candidate will have deep expertise in Microsoft Azure, DevOps methodologies, and Infrastructure as Code (IaC) using BICEP. You will play a key role in designing, building, and maintaining scalable, secure, and resilient cloud infrastructure to support our mission-critical applications.

Key Responsibilities:

  • Design, implement, and manage Azure-based infrastructure using BICEP and other IaC tools.
  • Collaborate with development and operations teams to build and maintain CI/CD pipelines for automated deployments.
  • Manage and monitor Azure services such as Azure Functions, App Services, AKS, Azure Storage, and Azure Monitor.
  • Ensure high availability, performance, and scalability of cloud solutions.
  • Maintain version control and collaborate on code using Git.
  • Support containerized workloads using Docker and Kubernetes (AKS experience is a plus).
  • Troubleshoot infrastructure issues and provide robust solutions in a timely manner.
  • Ensure adherence to security, compliance, and best practices in cloud environments.
  • Document cloud architecture, configurations, and procedures clearly for team collaboration and knowledge sharing.

Required Skills & Qualifications:

  • 5+ years of hands-on experience in cloud engineering with a strong focus on Microsoft Azure.
  • Proven experience in working with Azure services including Functions, App Services, AKS, Storage, and Monitor.
  • Strong expertise in DevOps practices and building/managing CI/CD pipelines.
  • Proficiency in BICEP for infrastructure provisioning (ARM template experience is a plus).
  • Experience with Git and automation/scripting tools.
  • Familiarity with containerization technologies like Docker and orchestration using Kubernetes.
  • Strong analytical and problem-solving skills.
  • Excellent communication and team collaboration abilities.

eJAmerica

Today

AI/Data Science & Cloud Engineer (Part-Time, Equity-Based Future Full-Time with Salary)

About the job

Remote (PAN India), Part-Time, Equity-Based Initially

About Us:

Guard-N-Gel Ltd is a mission-driven mental health startup empowering university students and young adults through AI-driven, stigma-free, personalized well-being tools. Our digital mental health app combines emotional intelligence, smart tracking, and access to professional support - all powered by cutting-edge AI and cloud technologies.

We're now building our tech core - and looking for a passionate AI/Data Science & Cloud Engineer to join us on this transformative journey. You'll help us build the brain behind the app.

Your Role

As our AI/Data Science & Cloud Engineer, you'll work directly with the founding team and psychologists to:

  • Design and build scalable AI/ML models for early detection of mental health risks
  • Work with NLP, sentiment/emotion analysis, and personalization algorithms
  • Optimize cloud architecture (AWS, GCP or Azure) for secure and scalable app deployment
  • Develop data pipelines for continuous learning and mental health insights
  • Collaborate on MVP prototyping, testing, and user validation with real students
  • Ensure best practices in data privacy, ethics, and compliance (GDPR, HIPAA if applicable)
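
As a toy illustration of the sentiment-analysis piece of this role, the sketch below trains a tiny text classifier with scikit-learn. The four labelled examples are invented for illustration and are far too small for real use; a production model would need a proper dataset, evaluation, and a privacy review in line with the compliance point above.

```python
# Tiny sentiment-classification sketch; the dataset is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I feel completely overwhelmed by exams",
    "Had a great day with friends today",
    "I can't sleep and everything feels heavy",
    "Feeling calm and hopeful about next week",
]
labels = ["negative", "positive", "negative", "positive"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["Today was stressful but I managed"]))
```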

We're Looking For

  • 2+ years of experience (or solid portfolio) in AI/ML/Data Science
  • Strong skills in Python, TensorFlow/PyTorch, Pandas, scikit-learn
  • Hands-on experience with cloud services (AWS/GCP/Azure), Docker/Kubernetes is a plus
  • Experience with NLP, emotion detection, recommender systems - highly valued
  • A passion for solving real-world problems, especially in mental health
  • Willingness to grow with the startup - part-time with equity, evolving into a full-time salaried role

What We Offer

  • Meaningful equity in an early-stage health tech startup
  • Work on a purpose-driven mission that impacts student mental health
  • Flexible remote work culture
  • Opportunity to grow into a founding tech leadership role
  • Future full-time salary + benefits as we raise funding

Interested?

Let's build something meaningful together. Apply via LinkedIn or email your CV and a short note to .

GuardNGel LTD

Today

Big Data Engineer

Hi All,

Our client in Noida is urgently hiring for an experienced Big Data Developer.

Experience - 7+ Years

Location - Noida (5 days work from office)

Shift Time - Up to 8 PM IST

Notice Period - Immediate to 30 Days

Big Data Engineer: Hadoop is a preferred skill but not mandatory.

Must-have skills: Snowflake or Redshift, AWS (Glue, Lambda, EMR, etc.), and strong SQL.

Education and Experience:

  • Bachelor's degree in computer science or a similar technical field
  • Minimum 5 years of experience in Big Data Engineering / Data Analysis

Preferred Skills and Qualifications:

  • Proficiency in Python, SQL, and Apache Spark
  • Strong experience with Snowflake
  • AWS services such as EMR, Glue (serverless architecture), S3, Athena, IAM, Lambda, and CloudWatch
  • Core Spark, Spark Streaming, DataFrame API, Dataset API, RDD APIs, and Spark SQL programming, dealing with processing terabytes of data
  • Advanced SQL using the Hive/Impala framework, including SQL performance tuning
  • Expertise in Hadoop and other distributed computing frameworks
  • ElasticSearch (OpenSearch) and Kibana dashboards
  • Resource management frameworks such as YARN or Mesos
  • Physical table design in a Big Data environment
  • External job schedulers such as Autosys, AWS Data Pipeline, Airflow, etc.
  • Experience working with key/value data stores such as HBase
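
To illustrate the DataFrame-API expectation above, here is a minimal PySpark sketch that rolls up events by day and type. The input path and column names are placeholders, not details from the role.

```python
# Minimal Spark DataFrame aggregation; path and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("event-rollup").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")  # placeholder path

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .count()
    .orderBy("event_date")
)

daily_counts.show(20, truncate=False)
```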

Data Analyst:

Skills, Experience and Requirements:

  • Minimum of 5 years of experience delivering data solutions on a variety of data warehousing platforms
  • Experience with SQL, Python, and Excel in a business environment required
  • 2+ years of experience with Tableau, developing calculated fields
  • Experience with data cleaning and blending multiple data sources using ETL tools
  • Experience in cloud technologies (like AWS) and data analytics platforms (e.g., Redshift and Snowflake)
  • Experience in delivering self-service analytics, insights, KPIs, and metrics
  • Experience building and maintaining a data catalog with a dictionary
  • Experience with scheduling tools like Control-M or Airflow
  • Strong understanding of and experience with data warehousing concepts required
  • Experience manipulating large datasets in a business environment required
  • Experience translating data into actionable insights required
  • Excellent facilitation skills; strong desire to communicate in all settings
  • Apply objective, analytical, and orderly thinking to the analysis of complex problems
  • Embrace the Adventure: willing to take risks, try innovative approaches, be adaptable to change, and learn from failures
  • Demonstrate Curiosity: strong desire to learn and add value to all aspects of the Dish business

Education and Experience:

  • Bachelor's degree with 5 years of experience in a relevant field such as business operations, analytics, and computer science; a Master's degree is a plus

Empresent Global LLP

Today

Quality Assurance engineer - Playwright

About Client:

Our client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 350,000, and a NASDAQ listing. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focus spans digital engineering, cloud services, AI and data analytics, enterprise applications (SAP, Oracle, Salesforce), IT infrastructure, and business process outsourcing. Major delivery centers in India include cities like Chennai, Pune, Hyderabad, and Bengaluru, with offices in over 35 countries. India is a major operational hub, with as its U.S. headquarters.

  • Job Title : Quality Assurance engineer - Playwright
  • Key Skills : Automation Testing Playwright, Java, cucumber.
  • Job Locations : Hyderabad, Chennai, Bangalore, Pune, Kolkata
  • Experience : 9+ Years.
  • Education Qualification : Any Graduation.
  • Work Mode : Hybrid.
  • Employment Type : Contract.
  • Notice Period : Immediate

Job Description:

Minimum 7+ years of experience with Playwright, Java, and BDD (Cucumber).

Strong Java API testing/automation skills are a must: Postman, RestAssured, SOAP, Karate, or similar.

Hands-on experience with test automation frameworks such as Selenium, Playwright, Cypress, or similar.

Strong coding skills in Java/JavaScript, Core Java, TypeScript, or similar.

Ability to design, develop, and build test automation frameworks from scratch, choosing the right frameworks based on project needs.

Cross-browser testing/automation using SauceLabs, BrowserStack, or similar.

Ability to work in a flexible, fast-paced Agile environment.

Strong communication and problem-solving skills.

Define and report Quality Engineering metrics.

Lead and mentor a team of quality engineers.

Hands-on experience with SQL.
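
As a small illustration of the browser-automation work described above: the role asks for Playwright with Java and Cucumber, but the sketch below uses Playwright's Python API to stay consistent with the other snippets in this document. The URL and the assertion are placeholders.

```python
# Minimal Playwright smoke test; the target URL and check are placeholders.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")  # placeholder URL

    # A simple assertion of the kind a smoke test might make.
    assert "Example" in page.title()

    browser.close()
```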

People Prime Worldwide

Today

Technical Engineer - Anti-Corrosion Carbon Steel Pipe

Company Description

TaxKitab is a leading tax and financial consultancy firm based in Pune, Maharashtra, India, offering comprehensive solutions for taxation, compliance, and financial needs to clients across India and worldwide. Our team of experienced professionals helps businesses and individuals navigate the complexities of the financial landscape with ease and confidence. Our services range from GST and income tax consultancy to company registration, ROC compliance, trademark registration, and startup consultancy. We also offer part-time and full-time accounting and bookkeeping services. We take pride in delivering exceptional service and personalized solutions to meet the unique needs of our clients, making us a trusted partner for businesses of all sizes.

Role Description

This is a full-time on-site role located in Mumbai for a Technical Engineer - Cooling & Waterproofing System. The Technical Engineer will be responsible for designing, implementing, and maintaining cooling and waterproofing systems. Day-to-day tasks will involve conducting site inspections, preparing technical drawings, working on project planning and execution, ensuring compliance with safety standards, and providing technical support to the project teams. Coordination with clients, vendors, and other stakeholders is also an essential aspect of this role, aimed at ensuring quality and timely project delivery.

Qualifications
  • Experience in designing and implementing cooling and waterproofing systems
  • Proficiency in using technical drawing software and preparing technical documentation
  • Strong project planning and execution skills
  • Knowledge of safety standards and compliance regulations
  • Excellent communication and coordination skills
  • Ability to work on-site in Mumbai
  • Bachelor's degree in Engineering or related field
  • Experience in the construction or consultancy industry is a plus
  • Problem-solving skills and attention to detail

TaxKitab

Today

Senior Java Software Engineer

Job Title: Senior Java Developer

Location: Kochi (Ernakulam H.O, Kerala, 682011) / Thiruvananthapuram (TVM, Kerala, 695001)

Experience Required: 7+ Years

Notice Period: Immediate Joiners Preferred

Compensation: Up to 25 LPA

About the Role:

We are looking for an experienced Senior Java Developer to join our dynamic technology team. This role requires strong expertise in backend development using Java, Spring Boot, and Microservices architecture, along with solid experience in cloud platforms (AWS).

As a Senior Developer, you will play a key role in designing, developing, and deploying robust, scalable, and high-performance applications.

Key Responsibilities:

Design, develop, and maintain enterprise-level applications using Java and Spring Boot.

Build scalable microservices-based architectures.

Work with AWS Cloud services for application deployment, monitoring, and scaling.

Collaborate with cross-functional teams including product owners, architects, and testers in an Agile environment.

Write clean, efficient, and well-documented code following best practices.

Participate in code reviews and contribute to team knowledge sharing.

Troubleshoot and resolve technical issues in development and production environments.

Ensure performance, security, and responsiveness of applications.

Required Skills & Qualifications:

7+ years of hands-on experience in Java development.

Strong expertise in Spring Boot framework.

Solid experience with Microservices architecture & development.

Good knowledge and working experience with AWS Cloud (EC2, S3, Lambda, etc.).

Strong understanding of RESTful APIs and Web Services.

Familiarity with Agile/Scrum methodologies.

Good problem-solving and analytical skills.

Strong communication and collaboration abilities.

Velodata Global Pvt Ltd

Today