
    Bypassing Object Detection: The Rise of Logo Manipulation in Phishing Attacks

    By BUFFERZONE Team, 18/04/2024

    Target: IT Professionals (Elementary)

    Tags: Phishing, Safe Workspace®, Safe Browsing, NoCloud™ Anti-Phishing, Object Detection

    In the ever-evolving cybersecurity landscape, phishing remains one of the oldest yet most effective tools in a cybercriminal’s arsenal. However, as cybersecurity measures grow more sophisticated,
    so do the tactics of these digital adversaries. A particularly cunning evolution in phishing is the adversarial attack: precisely manipulating company logos to slip past AI (Artificial Intelligence) detection defenses.

    The Stealthy Art of Adversarial Attacks

    Adversarial attacks in cybersecurity are techniques that apply subtle alterations to digital content so that AI-based detection systems fail to identify threats while human recognition
    is unaffected. These manipulations are often indiscernible to the naked eye, yet they are enough to deceive the algorithms that detect phishing attempts. Our earlier blog discusses adversarial
    attacks on login page text; this blog focuses on visible attacks against brand logos.

    The Devil in the Details: Logo Manipulation Techniques

    Logo manipulation involves altering key visual elements of a logo to avoid detection by security algorithms. Techniques range from slight color adjustments and pixel-level noise
    addition to more sophisticated geometric transformations, as illustrated in the sketch below. These changes are calculated to preserve the logo’s recognizability to human observers while the modifications in the image
    are designed to bypass AI-driven object detection.
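
    For illustration, the short Python sketch below applies each of these manipulation families to a logo image. It assumes Pillow and NumPy are installed, and the file names (brand_logo.png, manipulated_logo.png) are hypothetical placeholders:

    ```python
    # Illustrative sketch of the three manipulation families named above:
    # color adjustment, pixel-level noise, and a geometric transformation.
    import numpy as np
    from PIL import Image, ImageEnhance

    logo = Image.open("brand_logo.png").convert("RGB")  # hypothetical input file

    # 1. Slight color adjustment: a 5% saturation shift is barely visible to
    #    a human but changes the pixel statistics a detector may rely on.
    recolored = ImageEnhance.Color(logo).enhance(1.05)

    # 2. Pixel-level noise: low-amplitude Gaussian noise leaves the logo
    #    recognizable while perturbing the features a model extracts.
    arr = np.asarray(recolored, dtype=np.float32)
    noisy = arr + np.random.normal(0, 2.0, arr.shape)   # sigma of 2 (out of 255) is subtle
    noisy = Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

    # 3. Geometric transformation: a mild horizontal stretch
    #    (cf. "logo stretching" in the table below).
    w, h = noisy.size
    stretched = noisy.resize((int(w * 1.08), h))

    stretched.save("manipulated_logo.png")
    ```

    None of these steps requires any knowledge of the target model’s internals, which is exactly what makes such attacks cheap to mount.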

    This manipulation exploits a critical gap between how security systems and humans process visual information. While people can easily recognize a slightly altered logo, AI systems
    can falter, mistaking the tampered logo for a harmless or unrelated image. Recent research reports the following observations:

    1. “Despite abundant evidence showing that ML models are vulnerable, practitioners persist in treating such threats as low priority.”
    2. “To evade phishing ML detectors, attackers employ tactics relying on cheap but effective methods that are unlikely to result from gradient computations.”

    Based on an analysis of one hundred phishing pages, the paper summarized the popularity of the following logo attacks:

    Evasive Strategy          Count
    Company name style          25
    Blurry logo                 23
    Cropping                    20
    No company names            16
    No visual logo              13
    Different visual logo       12
    Logo stretching             11
    Multiple forms-images       10
    Background patterns          8
    “Log in” obfuscation         6
    Masking                      3

    Since these attacks are easy to create and highly effective, they pose a significant threat to AI-based detection.

    Technological Countermeasures:

    To overcome adversarial attacks, we recommend the following steps:

    Advanced Detection Algorithms: Researchers continuously develop sophisticated AI models with more robust detection capabilities and lower prediction latency.
    In addition, researchers are working on state-of-the-art algorithms to enhance protection against adversarial attacks. One such initiative is the Trust.AI consortium, which
    focuses on improving AI models’ confidence and resilience against adversarial attacks.

    Adversarial Training: This involves training detection systems on examples of manipulated logos, thereby improving their ability to recognize such alterations; a minimal sketch follows.
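
    As a rough illustration of augmentation-based adversarial training, the sketch below pairs each clean logo with randomly manipulated variants drawn from the evasive strategies in the table above. The dataset paths and the commented training loop are hypothetical; any real pipeline would feed these pairs into its own model:

    ```python
    # A minimal sketch: clean logos and their manipulated variants share a
    # label, so the detector learns to recognize altered logos too.
    import random
    from PIL import Image, ImageFilter

    def manipulate(logo: Image.Image) -> Image.Image:
        """Apply one randomly chosen evasive transformation."""
        choice = random.choice(["blur", "crop", "stretch"])
        w, h = logo.size
        if choice == "blur":                       # "blurry logo"
            return logo.filter(ImageFilter.GaussianBlur(radius=1.5))
        if choice == "crop":                       # "cropping"
            return logo.crop((0, 0, int(w * 0.9), int(h * 0.9)))
        return logo.resize((int(w * 1.15), h))     # "logo stretching"

    def augmented_training_set(clean_logos, variants_per_logo=3):
        """Yield (image, label) pairs; manipulated variants keep the clean label."""
        for path, label in clean_logos:
            logo = Image.open(path).convert("RGB")
            yield logo, label
            for _ in range(variants_per_logo):
                yield manipulate(logo), label

    # Hypothetical usage: feed the augmented stream into any training loop.
    # for image, label in augmented_training_set([("acme_logo.png", "acme")]):
    #     model.train_step(image, label)
    ```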

    Human-Centric Defenses:

    Education and Awareness: Regular employee training sessions can significantly enhance an organization’s defense against phishing. Users may not detect the subtle image
    manipulation itself but can find other anomalies on the web page, such as text manipulation, abnormal URL names, uncommon top-level domains (TLDs), and pages served from free web-hosting sites; a sample heuristic appears below.
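
    As one possible starting point for such checks, the heuristic below flags uncommon TLDs and free web-hosting domains in a URL. The domain lists are illustrative samples we chose for this sketch, not a complete policy:

    ```python
    # A small heuristic sketch of the URL anomaly checks described above.
    from urllib.parse import urlparse

    UNCOMMON_TLDS = {"zip", "xyz", "top", "gq", "tk"}          # sample only
    HOSTING_SITES = {"weebly.com", "000webhostapp.com",
                     "github.io", "firebaseapp.com"}           # sample only

    def url_anomalies(url: str) -> list[str]:
        """Return a list of human-readable warnings for a given URL."""
        host = urlparse(url).hostname or ""
        warnings = []
        tld = host.rsplit(".", 1)[-1]
        if tld in UNCOMMON_TLDS:
            warnings.append(f"uncommon TLD: .{tld}")
        if any(host == s or host.endswith("." + s) for s in HOSTING_SITES):
            warnings.append("page served from a free web-hosting site")
        return warnings

    print(url_anomalies("https://acme-login.weebly.com/signin"))
    # ['page served from a free web-hosting site']
    ```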

    Policy and Procedure: Setting up clear protocols for verifying the authenticity of suspicious emails or websites can prevent phishing attempts from succeeding.

    In future blogs, we will discuss how adversaries try to bypass AI and how AI can overcome these challenges by fighting AI with AI.

    Interested in learning more?

    Contact us