As we enter what is often called the “fourth industrial revolution,” artificial intelligence (AI) and machine learning are transforming productivity, efficiency, and governance across sectors. In response, the Ministry of Information Technology and Telecommunication has proposed a National Artificial Intelligence Policy. The policy aims to promote AI’s use and development while briefly acknowledging the need for ethical practices to protect users’ rights and privacy. Given the current state of cyber safety, however, particularly for women, these promises seem overly optimistic.
Understanding AI’s Impact on Gender-Based Violence
AI encompasses technologies that enable computers to mimic human learning and decision-making. Techniques such as machine learning and deep learning analyze data to identify patterns and generate responses. ChatGPT is one widely used generative AI tool that can create text and images from user prompts, and it is often deployed to improve business efficiency and decision-making.
Despite its benefits, AI also has a darker side. Deepfake technology, a type of generative AI, can create convincing but false images and videos. Studies show that 98% of deepfake videos are pornographic, with 99% targeting women or girls. This technology contributes to technology-facilitated gender-based violence (TFGBV), including image-based abuse, blackmail, misinformation, impersonation, cyberstalking, and threats.
The Gendered Nature of Cyber Violence
Research indicates that women are more frequently victims of cyber violence than men. For instance, 26% of women aged 18-24 experience cyberstalking, compared to only 7% of men in the same age group. This highlights the greater risk women face in online spaces.
AI has intensified these risks by making misinformation and fake news more convincing. Generative AI can fabricate plausible but false personal histories and produce templates for cyber-harassment campaigns, perpetuating misinformation and violating women’s dignity, privacy, and rights.
Global and Local Efforts to Regulate AI
Efforts to regulate AI are emerging worldwide. The EU Artificial Intelligence Act, 2024, emphasizes safety and fundamental rights, restricting real-time biometric surveillance in public spaces and mandating that deepfake content be disclosed as artificially generated. UNESCO’s Recommendation on the Ethics of Artificial Intelligence and the US Blueprint for an AI Bill of Rights also stress protecting human rights in AI development and use.
Pakistan’s AI policy acknowledges the dangers of AI-generated fake content and aims to address disinformation and data privacy breaches. However, current mechanisms, like the Cybercrime Wing of the FIA, have been criticized for their ineffectiveness in handling complaints of TFGBV under the Prevention of Electronic Crimes Act, 2016.
Moving Forward
Effective regulation is crucial to address AI-driven TFGBV and uphold ethical standards. The ARD or the National Cybercrime Investigation Agency must implement robust measures to investigate and prosecute these offenses and protect human rights.