Meta Platforms Inc. reported a 32% rise in the return on ad spend for its apps, crediting the improvement to the incorporation of artificial intelligence (AI).
At the World Economic Forum in Davos, Nicola Mendelsohn, Meta's head of global business group, said that AI-powered ads not only yielded better financial outcomes but also cut the cost of acquisition by 17%, according to Bloomberg. AI tools also streamlined campaign creation significantly, collapsing a setup process of 15 to 30 steps into a single button.
Last year, Meta introduced new generative AI tools for advertisers, with capabilities for rapid image and text creation and predictive performance analysis for ads. Meta made these changes to improve speed and to adapt to Apple Inc.'s privacy changes.
Photo caption: Participants of the World Economic Forum (WEF) annual meeting walk past a booth of Meta, the company that owns and operates Facebook, Instagram, Threads, and WhatsApp, in Davos on January 18, 2024.
Ban on Political Ads
Last year, Meta restricted the use of its generative AI advertising tools for campaigns involving politics, housing, employment, credit, social issues, health, pharmaceuticals, and financial services. The worldwide policy extends Meta's earlier decision to bar advertisers in these sensitive categories from using generative AI tools in Ads Manager, per ZDNET.
Dan Neary, Meta's Asia-Pacific vice president, emphasized the company's commitment to understanding potential risks and implementing appropriate safeguards for the use of generative AI in ads related to regulated industries. The restrictions precede upcoming elections in various nations, including Indonesia, India, the United States, Finland, Pakistan, and Taiwan.
Pushing for AI Safety
Additionally, Meta last year launched "Purple Llama," a project aiming to provide open-source tools for developers to assess and enhance the reliability and safety of generative AI models before public deployment. Recognizing the collaborative nature required for ensuring AI safety, Meta aims to create a shared foundation through Purple Llama to address concerns surrounding large language models and other AI technologies.
Purple Llama's collaborative effort involves partnerships with AI developers, cloud services (AWS, Google Cloud), semiconductor companies (Intel, AMD, Nvidia), and software firms (Microsoft). The initiative seeks to deliver tools for research and commercial use to assess AI models' capabilities and identify safety risks, according to InfoWorld.
Among the initial tools released, CyberSecEval stands out: it evaluates cybersecurity risks in AI-generated software, such as a model's tendency to suggest insecure code or to comply with requests that would aid cyberattacks, helping developers test their AI models for potential vulnerabilities.
Llama Guard, another tool in the suite, is a language model trained to detect potentially harmful or offensive content, aiding developers in filtering unsafe inputs and preventing inappropriate outputs. Together, the tools underline the importance of continuous testing and improvement in securing AI systems.
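To illustrate the filtering pattern a safeguard model like Llama Guard enables, here is a minimal Python sketch. The keyword-based classifier is a deliberate stand-in, not the real Llama Guard model (which is a fine-tuned large language model), and all names and unsafe categories here are hypothetical:

```python
# Sketch of the guardrail pattern a safety model enables: screen the user's
# prompt, generate a reply, then screen the reply before returning it.
# is_unsafe() is a keyword stand-in for a call to an actual safety model
# such as Llama Guard; the terms below are hypothetical examples.

UNSAFE_TERMS = {"make a weapon", "steal credentials"}

def is_unsafe(text: str) -> bool:
    """Stand-in for a safety-model classification call."""
    lowered = text.lower()
    return any(term in lowered for term in UNSAFE_TERMS)

REFUSAL = "Sorry, I can't help with that."

def guarded_reply(prompt: str, generate) -> str:
    """Gate both the prompt and the generated reply through the filter."""
    if is_unsafe(prompt):
        return REFUSAL
    reply = generate(prompt)          # generate() stands in for an LLM call
    if is_unsafe(reply):
        return REFUSAL
    return reply

# Example with a toy generator in place of a real model:
print(guarded_reply("How do I bake bread?", lambda p: "Mix flour and water."))
print(guarded_reply("How do I steal credentials?", lambda p: "..."))
```

The double check, on the way in and on the way out, reflects how such safeguard models are typically deployed: unsafe requests are refused outright, and an otherwise benign request that elicits unsafe output is still caught before reaching the user.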