Dutch philosopher Desiderius Erasmus once said that prevention is better than a cure, and the same applies to online security: detecting threats before they can cause damage is the most effective way to mitigate them. "AI is no longer a future technology; it is an essential part of today's digital ecosystem," asserts Aravindh Manickavasagam, a seasoned technical program manager with expertise in artificial intelligence (AI). As the digital landscape expands, the need for robust harm detection and trust-and-safety mechanisms has become critical. Manickavasagam's role in scaling these systems highlights the intersection of advanced technology and the pressing need for online safety.
Manickavasagam's initiatives in developing reliable harm detection systems were most prominent during his time at Meta. "To scale harm detection, we implemented a comprehensive machine learning (ML) infrastructure to handle vast amounts of content in near real time," shares Manickavasagam.
Building the Foundation: Data Labels, Model Training, and Calibration
According to Manickavasagam, tackling any ML classification problem for harm detection requires a meticulous, quality-controlled data labeling process. "Aggregating large datasets and employing quality-controlled human labeling to annotate harmful content is key," explains Manickavasagam. This human-in-the-loop methodology ensures that the ML models are trained on high-quality, accurate data, enabling them to effectively identify various types of harmful content, including graphic violence.
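The article does not detail Meta's labeling pipeline, but a common quality-control pattern in human-in-the-loop annotation is to collect several independent labels per item and accept only those with high inter-annotator agreement. The Python sketch below illustrates that pattern; the aggregate_labels helper, the label names, and the agreement threshold are all hypothetical, not drawn from Meta's systems.

```python
from collections import Counter

def aggregate_labels(annotations, min_agreement=0.75):
    """Majority-vote label aggregation with an agreement threshold.

    annotations: dict mapping item_id -> list of labels from
    independent human reviewers (e.g., "violent", "benign").
    Items whose top label falls below the agreement threshold are
    routed for re-review instead of entering the training set.
    """
    accepted, needs_review = {}, []
    for item_id, labels in annotations.items():
        top_label, votes = Counter(labels).most_common(1)[0]
        if votes / len(labels) >= min_agreement:
            accepted[item_id] = top_label
        else:
            needs_review.append(item_id)
    return accepted, needs_review

# Example: three reviewers per item (hypothetical data).
raw = {
    "post_1": ["violent", "violent", "violent"],  # unanimous -> accepted
    "post_2": ["violent", "benign", "benign"],    # 2/3 < 0.75 -> re-review
}
labels, escalated = aggregate_labels(raw)
print(labels)     # {'post_1': 'violent'}
print(escalated)  # ['post_2']
```

Items that fail the agreement check are typically escalated to expert reviewers rather than silently dropped, which protects training-data quality without discarding hard examples.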
The subsequent phases involve building the actual model, followed by training, validation, fine-tuning, and, most importantly, model calibration. "When working with classification problems, machine learning models often produce a probabilistic outcome ranging between 0 and 1. This probabilistic output is then used by people or platforms to make decisions. Unfortunately, many machine learning models' probabilistic outputs cannot be directly interpreted as the probability of an event happening. To achieve this outcome, the model needs to be calibrated," adds Manickavasagam.
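Manickavasagam does not name a specific calibration technique, so the sketch below uses Platt scaling (scikit-learn's "sigmoid" method) on synthetic data as one standard option; isotonic regression is a common non-parametric alternative. The Brier score, which is lower when predicted probabilities track observed outcomes, shows the effect.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for a harmful/benign content dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# An uncalibrated classifier: its raw scores often cannot be read
# as the true probability that content is harmful.
raw_model = GaussianNB().fit(X_train, y_train)

# Platt scaling ("sigmoid") fits a logistic mapping from raw scores
# to calibrated probabilities using cross-validation.
calibrated = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=3)
calibrated.fit(X_train, y_train)

# Brier score (lower is better) measures how well predicted
# probabilities match observed outcomes.
for name, model in [("raw", raw_model), ("calibrated", calibrated)]:
    probs = model.predict_proba(X_test)[:, 1]
    print(name, round(brier_score_loss(y_test, probs), 4))
```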
Harnessing ML for Improved User Experience
In today's digital landscape, prioritizing user experience while maintaining trust and safety is paramount. Ensuring that users feel safe and have control over their interactions with content fosters a trustworthy environment. Balancing these elements is essential to creating a platform that users can rely on and enjoy.
Manickavasagam coordinated the backend machine learning infrastructure for Facebook and Instagram's harm detection systems, a program that supported harm reduction for over one billion monthly active users across the two platforms. He managed projects like launching the machine learning backend for Instagram's sensitive content control feature, which allows users to manage the visibility of sensitive content on their Explore page. Default restrictions are in place for users under 18, while those over 18 can customize their settings, filtering out content such as graphic violence and other explicit material based on community guidelines. This feature aims to give teens more age-appropriate social media experiences without compromising the enjoyable aspects of being online.
The underlying infrastructure classifies content into numerous categories, including violence, sexually explicit content, regulated products, cosmetic procedures, and health-related claims. It also allows for finer-grained control, enabling specific content to be filtered out of recommendation systems based on user preferences or profile requirements such as age-based defaults.
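To make the idea concrete, here is a minimal, hypothetical sketch of how per-category scores and age-based defaults might gate what enters a recommendation feed. The category names, sensitivity levels, and thresholds are invented for illustration and are not Meta's actual values.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sensitive-content categories; a real system would use
# calibrated per-category probabilities produced by the classifiers.
SENSITIVE_CATEGORIES = {"graphic_violence", "sexually_suggestive",
                        "regulated_products", "cosmetic_procedures"}

@dataclass
class UserSettings:
    age: int
    sensitivity: Optional[str] = None  # "less", "standard", or "more"

def effective_sensitivity(user: UserSettings) -> str:
    # Under-18 accounts get the restrictive default and cannot relax it.
    if user.age < 18:
        return "less"
    return user.sensitivity or "standard"

def allow_in_recommendations(content_scores: dict, user: UserSettings) -> bool:
    """Keep content out of recommendations when any category score
    meets or exceeds the threshold for the user's sensitivity level."""
    thresholds = {"less": 0.3, "standard": 0.5, "more": 0.7}  # invented values
    limit = thresholds[effective_sensitivity(user)]
    return all(content_scores.get(cat, 0.0) < limit
               for cat in SENSITIVE_CATEGORIES)

scores = {"graphic_violence": 0.42}
print(allow_in_recommendations(scores, UserSettings(age=16)))                      # False
print(allow_in_recommendations(scores, UserSettings(age=25, sensitivity="more")))  # True
```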
Managing programs like these requires coordination with a vast number of stakeholders, including engineers, policy specialists, data scientists, and product managers, as well as extensive A/B testing to certify the system's effectiveness.
The Importance of Real-World Testing and Continuous Monitoring
Once the models are optimized, they can be integrated into online systems such as content recommendation systems for real-time filtering of harmful content, shielding users from exposure. "Deployment phases are critical," notes Manickavasagam. "Models' offline performance may not be the same as online performance, so A/B testing and launch reviews are critical for success."
A/B testing involves comparing two versions of a system to determine which performs better, and it is crucial for validating the effectiveness of new models and features before full deployment. By systematically testing variations, the team can ensure that changes positively impact user experience and safety. "A/B testing allows us to measure the real-world impact of our models," explains Manickavasagam. "It provides data-driven insights that guide our decisions and help us refine our systems continuously."
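The article does not describe the statistics behind these launch reviews. One minimal, standard approach for comparing a safety metric between control and treatment groups is a two-proportion z-test, sketched below with made-up numbers (here, the rate at which users report encountering harmful content).

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing event rates between control (A)
    and treatment (B) groups in an A/B experiment."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical experiment: did the new model reduce the rate at
# which users report encountering harmful content?
z, p = two_proportion_ztest(conv_a=480, n_a=100_000,   # control
                            conv_b=410, n_b=100_000)   # treatment
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the 0.05 level
```

A real launch review would weigh many more metrics than a single test, but the mechanics of comparing control and treatment are the same.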
By prioritizing rigorous A/B testing and continuous model enhancement, teams can ensure that trust and safety systems not only meet but exceed the standards for accuracy and reliability, ultimately fostering a safer, more engaging user experience.
The Leading Role of a Technical Program Manager
As a technical program manager, Manickavasagam's responsibilities are multifaceted. He leads cross-functional teams, driving collaboration among engineers, data scientists, and product managers to deliver sophisticated ML platforms and products. "Strategic planning is at the heart of what we do," he emphasizes. "Compiling roadmaps and technical system designs, managing execution, and fostering stakeholder partnerships are essential to our success."
Maintaining high standards is another critical component of his role: ensuring that the ML products adhere to stringent quality requirements is paramount to safeguarding the platform's reputation.
Online Shields: Industry Impact and Future Prospects
The implementation of these advanced harm detection systems has far-reaching effects. Reflecting on the journey, Manickavasagam has high hopes for the future of his craft. "At the end of the day, our motivation is creating a safer online environment for all users," he concludes. "With recent advancements in Gen AI, our capabilities to innovate in this space are boundless."
In an era where digital safety is paramount, the efforts of AI professionals like Aravindh Manickavasagam are essential in providing the best shield against online harm.