Job Title: AI Security Researcher
Company Overview:
Pulivarthi Group is a premier global provider of staffing and IT technology solutions, renowned for delivering exceptional services tailored to each client's unique needs. With a steadfast commitment to excellence, we merge expertise with innovation, ensuring cost-effective solutions of the highest quality. Our diverse client base spans healthcare, finance, government, and beyond, reflecting our adaptability and proficiency across industries. Operating in the United States, Canada, and Mexico, we pride ourselves on aligning with clients' cultures, deploying top-tier talent, and utilizing cutting-edge technologies. Pulivarthi Group stands as a beacon of reliability, efficiency, and innovation in the realm of staffing solutions.
Job Overview/Summary:
We are seeking an AI Security Researcher to join our dynamic team. This role involves assessing AI/ML systems for security vulnerabilities, conducting adversarial research, developing robust defenses, and contributing to the safe and ethical deployment of AI technologies.
Responsibilities:
Threat Modeling & Risk Assessment
- Identify and analyze security risks across the AI/ML lifecycle
- Perform threat modeling on models, data pipelines, and APIs
- Assess risks from adversarial examples, model theft, data poisoning, and prompt injection
Adversarial Machine Learning (AML) Research
- Design and implement adversarial attacks to test model robustness
- Develop and evaluate defenses such as adversarial training and robust optimization
- Explore vulnerabilities in generative models (e.g., LLMs, GANs)
Red Teaming & Penetration Testing
- Simulate real-world attacks on AI systems
- Conduct prompt injection, data poisoning, model inversion, and membership inference attacks
- Collaborate with red/blue teams to enhance system resilience
Secure Model Development
- Apply security best practices in model development and deployment
- Ensure data provenance, confidentiality, and integrity
- Support federated learning and privacy-preserving methods (e.g., differential privacy, homomorphic encryption)
Monitoring & Incident Response
- Detect and respond to AI-specific threats in production
- Build tools to monitor AI behavior and detect anomalies
- Investigate security incidents involving AI misuse
Research & Development
- Stay abreast of AI/ML security research
- Publish papers, contribute to open-source projects, and write technical blogs
- Collaborate with engineers, product teams, and policy experts
Governance, Compliance & Ethical Oversight
- Ensure compliance with regulations and frameworks such as GDPR and the NIST AI RMF
- Conduct security audits and contribute to governance frameworks
- Address dual-use risks and promote responsible AI
Primary Skills:
- Adversarial ML research and implementation
- Threat modeling and risk analysis in AI systems
- Strong Python programming skills
- Experience with security and privacy tools (e.g., CleverHans, IBM ART, Opacus)
Secondary Skills (Good to Have):
- Experience with generative models (LLMs, GANs)
- Familiarity with compliance frameworks (GDPR, NIST AI RMF)
- Kubernetes, Docker, and cloud security (AWS, GCP, Azure)
- C/C++ for low-level attack development
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Cybersecurity, AI/ML, or a related field
- 2+ years of experience in AI/ML security or adversarial research
- Demonstrated contributions to AI security research (e.g., publications, open-source projects)
Benefits/Perks:
- Competitive salary and performance bonuses
- Health, dental, and vision insurance
- Paid time off and flexible working hours
- Opportunities to attend top AI/security conferences
- Collaborative, innovation-driven environment
Equal Opportunity Statement:
Pulivarthi Group is proud to be an equal opportunity employer. We are committed to building a diverse and inclusive culture and celebrate authenticity. We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity, sexual orientation, age, marital status, disability, protected veteran status, or any other legally protected characteristics.