IT Brief Asia - Technology news for CIOs & IT decision-makers

Protegrity hires 10-plus AI & data protection experts

Tue, 21st Apr 2026

Protegrity has expanded its team with more than 10 hires focused on artificial intelligence and data protection, including senior leadership roles across technology, product, engineering and machine learning.

New recruits include Sameer Tiwari as Chief Technology Officer; Milan Chutake as Vice President of Engineering; Jessica Hammond as Senior Director of Project Management, GenAI; Averell Gatton as Director of GenAI; and Auria Moore as Director of Digital Ecosystem. The company also appointed Saravana Krishnamurthy as Senior Vice President of Product Management, Greg Stout as Vice President of AI Engineering, and Brandon Burge as Senior Director of Strategy and TechOps and Chief of Staff to the Chief Technology Officer.

Other additions span privacy engineering, product management and machine learning. They include Luis Santos as Director of AI Privacy Engineering, Jake Henri as Senior Principal GenAI Security Engineer, Hardeep Bassi as Lead Machine Learning Engineer, Luís M. V. Seabra as Senior Machine Learning Engineer, GenAI, Obinna Nwokonkwo as Lead Machine Learning Engineer and Arunima Sharma as Product Manager, GenAI.

The hiring push comes as companies move from testing artificial intelligence systems to using them in day-to-day business, increasing scrutiny of how sensitive data is handled. Protegrity, which specialises in data protection and privacy software, is looking to strengthen its position in AI security through these technical and product leadership appointments.

Tiwari brings experience from Meta, MariaDB and Salesforce, where he worked on distributed systems, cloud and database technologies. Chutake previously held senior leadership roles at Yahoo and LinkedIn, while Hammond has worked in B2B software and secure AI product development.

Gatton joins with a background in machine learning, physics and scientific computing, including work at MariaDB and MindsDB. Moore founded ClevrAI and has worked on software, AI products and digital transformation projects for large corporate customers.

Several hires also reflect an emphasis on combining product and engineering work with privacy and compliance requirements. Santos will focus on AI privacy engineering, drawing on experience in anonymisation, synthetic data and privacy-enhancing technologies, while Sharma has worked on AI security and identity and access management.

Henri adds a security perspective shaped by work on AI guardrail systems, including a previous role at Toyota. Seabra and Nwokonkwo bring research backgrounds in physics, chemistry and data analysis to machine learning work involving protected data processing.

Growth focus

The enlarged team is intended to support Protegrity's next phase of growth in AI data protection and security. The appointments span product strategy, technical infrastructure, machine learning development and partnership building, indicating a broad effort to develop new tools while adapting existing data protection products for AI use cases.

That approach reflects a wider shift in the software market. As more companies deploy generative AI in customer service, internal search, software development and workflow automation, vendors are placing greater emphasis on data access controls, audit trails and privacy measures.

For established data security providers, this creates pressure to show they can adapt long-standing protection methods to newer AI systems. It also increases the need for engineering teams that understand both machine learning and the regulatory obligations attached to corporate data.

Protegrity's latest hires suggest it wants to build that mix internally rather than rely solely on partnerships or acquisitions. Several recruits come from database companies, cloud firms and AI start-ups, pointing to a focus on practical deployment rather than pure research.

Chief Executive Officer Michael Howard outlined that view in comments accompanying the appointments. "The difference between being an AI security company and being a fake AI security company comes down to core competence, the opportunity to explore new use cases, and most important, AI talent," Howard said.

He added: "With these new experts onboard, on top of our core competence in data security, plus working on bleeding-edge use cases, like semantic tokenization on Knowledge Graphs, we are ahead of the pack, with an audacious future before us."